2026-04-17 02:35:33.601307 | Job console starting
2026-04-17 02:35:33.610670 | Updating git repos
2026-04-17 02:35:34.216150 | Cloning repos into workspace
2026-04-17 02:35:34.437668 | Restoring repo states
2026-04-17 02:35:34.459799 | Merging changes
2026-04-17 02:35:34.459821 | Checking out repos
2026-04-17 02:35:34.720951 | Preparing playbooks
2026-04-17 02:35:35.420309 | Running Ansible setup
2026-04-17 02:35:39.837297 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-17 02:35:40.647107 |
2026-04-17 02:35:40.647269 | PLAY [Base pre]
2026-04-17 02:35:40.665060 |
2026-04-17 02:35:40.665195 | TASK [Setup log path fact]
2026-04-17 02:35:40.695662 | orchestrator | ok
2026-04-17 02:35:40.713372 |
2026-04-17 02:35:40.713531 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-17 02:35:40.759031 | orchestrator | ok
2026-04-17 02:35:40.776091 |
2026-04-17 02:35:40.776214 | TASK [emit-job-header : Print job information]
2026-04-17 02:35:40.825159 | # Job Information
2026-04-17 02:35:40.825482 | Ansible Version: 2.16.14
2026-04-17 02:35:40.825549 | Job: testbed-upgrade-stable-ubuntu-24.04
2026-04-17 02:35:40.825608 | Pipeline: periodic-midnight
2026-04-17 02:35:40.825648 | Executor: 521e9411259a
2026-04-17 02:35:40.825685 | Triggered by: https://github.com/osism/testbed
2026-04-17 02:35:40.825722 | Event ID: 3d35b43f50114718a263c5f8a3bb7ce5
2026-04-17 02:35:40.835140 |
2026-04-17 02:35:40.835269 | LOOP [emit-job-header : Print node information]
2026-04-17 02:35:40.959456 | orchestrator | ok:
2026-04-17 02:35:40.959665 | orchestrator | # Node Information
2026-04-17 02:35:40.959700 | orchestrator | Inventory Hostname: orchestrator
2026-04-17 02:35:40.959726 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-17 02:35:40.959749 | orchestrator | Username: zuul-testbed02
2026-04-17 02:35:40.959771 | orchestrator | Distro: Debian 12.13
2026-04-17 02:35:40.959797 | orchestrator | Provider: static-testbed
2026-04-17 02:35:40.959819 | orchestrator | Region:
2026-04-17 02:35:40.959840 | orchestrator | Label: testbed-orchestrator
2026-04-17 02:35:40.959859 | orchestrator | Product Name: OpenStack Nova
2026-04-17 02:35:40.959878 | orchestrator | Interface IP: 81.163.193.140
2026-04-17 02:35:40.985224 |
2026-04-17 02:35:40.985390 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-17 02:35:41.499799 | orchestrator -> localhost | changed
2026-04-17 02:35:41.515525 |
2026-04-17 02:35:41.515691 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-17 02:35:42.617298 | orchestrator -> localhost | changed
2026-04-17 02:35:42.643675 |
2026-04-17 02:35:42.643828 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-17 02:35:42.922746 | orchestrator -> localhost | ok
2026-04-17 02:35:42.938372 |
2026-04-17 02:35:42.938580 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-17 02:35:42.977203 | orchestrator | ok
2026-04-17 02:35:42.997811 | orchestrator | included: /var/lib/zuul/builds/7b8edaf9148748ce8bf9b3adbffd19c3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-17 02:35:43.005905 |
2026-04-17 02:35:43.006005 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-17 02:35:44.737304 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-17 02:35:44.737638 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/7b8edaf9148748ce8bf9b3adbffd19c3/work/7b8edaf9148748ce8bf9b3adbffd19c3_id_rsa
2026-04-17 02:35:44.737710 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/7b8edaf9148748ce8bf9b3adbffd19c3/work/7b8edaf9148748ce8bf9b3adbffd19c3_id_rsa.pub
2026-04-17 02:35:44.737757 | orchestrator -> localhost | The key fingerprint is:
2026-04-17 02:35:44.737798 | orchestrator -> localhost | SHA256:9oIJ7QiCX9aFgCd9WgtyV1YwsE9qu1r1l/uGIYcirY0 zuul-build-sshkey
2026-04-17 02:35:44.737837 | orchestrator -> localhost | The key's randomart image is:
2026-04-17 02:35:44.737892 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-17 02:35:44.737930 | orchestrator -> localhost | | o. .o=o. |
2026-04-17 02:35:44.737966 | orchestrator -> localhost | | + =.++ . |
2026-04-17 02:35:44.737999 | orchestrator -> localhost | | = *o.o |
2026-04-17 02:35:44.738032 | orchestrator -> localhost | |. .o.= |
2026-04-17 02:35:44.738065 | orchestrator -> localhost | |o . + =.S . |
2026-04-17 02:35:44.738100 | orchestrator -> localhost | | o + =.*ooo o. |
2026-04-17 02:35:44.738134 | orchestrator -> localhost | | . . *=..oooo |
2026-04-17 02:35:44.738166 | orchestrator -> localhost | | .E... .... |
2026-04-17 02:35:44.738201 | orchestrator -> localhost | | ... .o. |
2026-04-17 02:35:44.738235 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-17 02:35:44.738312 | orchestrator -> localhost | ok: Runtime: 0:00:01.139644
2026-04-17 02:35:44.749691 |
2026-04-17 02:35:44.749836 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-17 02:35:44.783454 | orchestrator | ok
2026-04-17 02:35:44.796712 | orchestrator | included: /var/lib/zuul/builds/7b8edaf9148748ce8bf9b3adbffd19c3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-17 02:35:44.806142 |
2026-04-17 02:35:44.806240 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-17 02:35:44.829665 | orchestrator | skipping: Conditional result was False
2026-04-17 02:35:44.846213 |
2026-04-17 02:35:44.846387 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-17 02:35:45.520881 | orchestrator | changed
2026-04-17 02:35:45.531185 |
2026-04-17 02:35:45.531325 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-17 02:35:45.855354 | orchestrator | ok
2026-04-17 02:35:45.865340 |
2026-04-17 02:35:45.865498 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-17 02:35:46.335005 | orchestrator | ok
2026-04-17 02:35:46.344289 |
2026-04-17 02:35:46.344468 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-17 02:35:46.818810 | orchestrator | ok
2026-04-17 02:35:46.828652 |
2026-04-17 02:35:46.828800 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-17 02:35:46.853332 | orchestrator | skipping: Conditional result was False
2026-04-17 02:35:46.861792 |
2026-04-17 02:35:46.861912 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-17 02:35:47.304737 | orchestrator -> localhost | changed
2026-04-17 02:35:47.338803 |
2026-04-17 02:35:47.339037 | TASK [add-build-sshkey : Add back temp key]
2026-04-17 02:35:47.712391 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/7b8edaf9148748ce8bf9b3adbffd19c3/work/7b8edaf9148748ce8bf9b3adbffd19c3_id_rsa (zuul-build-sshkey)
2026-04-17 02:35:47.713098 | orchestrator -> localhost | ok: Runtime: 0:00:00.021312
2026-04-17 02:35:47.728282 |
2026-04-17 02:35:47.728544 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-17 02:35:48.185203 | orchestrator | ok
2026-04-17 02:35:48.194255 |
2026-04-17 02:35:48.194389 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-17 02:35:48.228746 | orchestrator | skipping: Conditional result was False
2026-04-17 02:35:48.283026 |
2026-04-17 02:35:48.283161 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-17 02:35:48.726101 | orchestrator | ok
2026-04-17 02:35:48.742570 |
2026-04-17 02:35:48.742771 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-17 02:35:48.787087 | orchestrator | ok
2026-04-17 02:35:48.796934 |
2026-04-17 02:35:48.797056 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-17 02:35:49.105894 | orchestrator -> localhost | ok
2026-04-17 02:35:49.122769 |
2026-04-17 02:35:49.122972 | TASK [validate-host : Collect information about the host]
2026-04-17 02:35:50.373117 | orchestrator | ok
2026-04-17 02:35:50.390156 |
2026-04-17 02:35:50.390298 | TASK [validate-host : Sanitize hostname]
2026-04-17 02:35:50.466606 | orchestrator | ok
2026-04-17 02:35:50.478360 |
2026-04-17 02:35:50.478596 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-17 02:35:51.083063 | orchestrator -> localhost | changed
2026-04-17 02:35:51.097168 |
2026-04-17 02:35:51.097332 | TASK [validate-host : Collect information about zuul worker]
2026-04-17 02:35:51.556135 | orchestrator | ok
2026-04-17 02:35:51.562538 |
2026-04-17 02:35:51.562666 | TASK [validate-host : Write out all zuul information for each host]
2026-04-17 02:35:52.155251 | orchestrator -> localhost | changed
2026-04-17 02:35:52.166152 |
2026-04-17 02:35:52.166279 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-17 02:35:52.474316 | orchestrator | ok
2026-04-17 02:35:52.483131 |
2026-04-17 02:35:52.483257 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-17 02:36:17.046578 | orchestrator | changed:
2026-04-17 02:36:17.046810 | orchestrator | .d..t...... src/
2026-04-17 02:36:17.046868 | orchestrator | .d..t...... src/github.com/
2026-04-17 02:36:17.046896 | orchestrator | .d..t...... src/github.com/osism/
2026-04-17 02:36:17.046918 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-17 02:36:17.046939 | orchestrator | RedHat.yml
2026-04-17 02:36:17.061386 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-17 02:36:17.061415 | orchestrator | RedHat.yml
2026-04-17 02:36:17.061469 | orchestrator | = 1.53.0"...
2026-04-17 02:36:27.540258 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-17 02:36:27.636899 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-17 02:36:28.137493 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-17 02:36:29.063550 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-17 02:36:29.499985 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-17 02:36:30.256410 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-17 02:36:30.324957 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-17 02:36:30.813749 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-17 02:36:30.813881 | orchestrator |
2026-04-17 02:36:30.813891 | orchestrator | Providers are signed by their developers.
2026-04-17 02:36:30.813898 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-17 02:36:30.813904 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-17 02:36:30.815693 | orchestrator |
2026-04-17 02:36:30.815788 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-17 02:36:30.815797 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-17 02:36:30.815818 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-17 02:36:30.815823 | orchestrator | you run "tofu init" in the future.
2026-04-17 02:36:30.815843 | orchestrator |
2026-04-17 02:36:30.815851 | orchestrator | OpenTofu has been successfully initialized!
2026-04-17 02:36:30.815857 | orchestrator |
2026-04-17 02:36:30.815864 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-17 02:36:30.815870 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-17 02:36:30.815876 | orchestrator | should now work.
2026-04-17 02:36:30.815884 | orchestrator |
2026-04-17 02:36:30.815889 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-17 02:36:30.815896 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-17 02:36:30.815903 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-17 02:36:31.550789 | orchestrator | Created and switched to workspace "ci"!
2026-04-17 02:36:31.550873 | orchestrator |
2026-04-17 02:36:31.550884 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-17 02:36:31.550892 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-17 02:36:31.550938 | orchestrator | for this configuration.
2026-04-17 02:36:32.068734 | orchestrator | ci.auto.tfvars
2026-04-17 02:36:32.533285 | orchestrator | default_custom.tf
2026-04-17 02:36:33.520575 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-17 02:36:34.114822 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-17 02:36:34.338104 | orchestrator |
2026-04-17 02:36:34.338177 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-17 02:36:34.338185 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-17 02:36:34.338190 | orchestrator | + create
2026-04-17 02:36:34.338202 | orchestrator | <= read (data resources)
2026-04-17 02:36:34.338207 | orchestrator |
2026-04-17 02:36:34.338211 | orchestrator | OpenTofu will perform the following actions:
2026-04-17 02:36:34.338277 | orchestrator |
2026-04-17 02:36:34.338283 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-17 02:36:34.338287 | orchestrator | # (config refers to values not yet known)
2026-04-17 02:36:34.338291 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-17 02:36:34.338296 | orchestrator | + checksum = (known after apply)
2026-04-17 02:36:34.338300 | orchestrator | + created_at = (known after apply)
2026-04-17 02:36:34.338304 | orchestrator | + file = (known after apply)
2026-04-17 02:36:34.338308 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.338329 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.338333 | orchestrator | + min_disk_gb = (known after apply)
2026-04-17 02:36:34.338337 | orchestrator | + min_ram_mb = (known after apply)
2026-04-17 02:36:34.338341 | orchestrator | + most_recent = true
2026-04-17 02:36:34.338346 | orchestrator | + name = (known after apply)
2026-04-17 02:36:34.338349 | orchestrator | + protected = (known after apply)
2026-04-17 02:36:34.338353 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.338360 | orchestrator | + schema = (known after apply)
2026-04-17 02:36:34.338364 | orchestrator | + size_bytes = (known after apply)
2026-04-17 02:36:34.338367 | orchestrator | + tags = (known after apply)
2026-04-17 02:36:34.338371 | orchestrator | + updated_at = (known after apply)
2026-04-17 02:36:34.338375 | orchestrator | }
2026-04-17 02:36:34.338381 | orchestrator |
2026-04-17 02:36:34.338385 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-17 02:36:34.338389 | orchestrator | # (config refers to values not yet known)
2026-04-17 02:36:34.338393 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-17 02:36:34.338397 | orchestrator | + checksum = (known after apply)
2026-04-17 02:36:34.338401 | orchestrator | + created_at = (known after apply)
2026-04-17 02:36:34.338404 | orchestrator | + file = (known after apply)
2026-04-17 02:36:34.338408 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.338412 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.338416 | orchestrator | + min_disk_gb = (known after apply)
2026-04-17 02:36:34.338419 | orchestrator | + min_ram_mb = (known after apply)
2026-04-17 02:36:34.338423 | orchestrator | + most_recent = true
2026-04-17 02:36:34.338427 | orchestrator | + name = (known after apply)
2026-04-17 02:36:34.338431 | orchestrator | + protected = (known after apply)
2026-04-17 02:36:34.338435 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.338439 | orchestrator | + schema = (known after apply)
2026-04-17 02:36:34.338442 | orchestrator | + size_bytes = (known after apply)
2026-04-17 02:36:34.338446 | orchestrator | + tags = (known after apply)
2026-04-17 02:36:34.338450 | orchestrator | + updated_at = (known after apply)
2026-04-17 02:36:34.338454 | orchestrator | }
2026-04-17 02:36:34.338470 | orchestrator |
2026-04-17 02:36:34.338474 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-17 02:36:34.338479 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-17 02:36:34.338483 | orchestrator | + content = (known after apply)
2026-04-17 02:36:34.338487 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-17 02:36:34.338491 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-17 02:36:34.338495 | orchestrator | + content_md5 = (known after apply)
2026-04-17 02:36:34.338498 | orchestrator | + content_sha1 = (known after apply)
2026-04-17 02:36:34.338502 | orchestrator | + content_sha256 = (known after apply)
2026-04-17 02:36:34.338506 | orchestrator | + content_sha512 = (known after apply)
2026-04-17 02:36:34.338510 | orchestrator | + directory_permission = "0777"
2026-04-17 02:36:34.338513 | orchestrator | + file_permission = "0644"
2026-04-17 02:36:34.338517 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-17 02:36:34.338521 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.338525 | orchestrator | }
2026-04-17 02:36:34.338593 | orchestrator |
2026-04-17 02:36:34.338600 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-17 02:36:34.338604 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-17 02:36:34.338608 | orchestrator | + content = (known after apply)
2026-04-17 02:36:34.338611 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-17 02:36:34.338615 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-17 02:36:34.338619 | orchestrator | + content_md5 = (known after apply)
2026-04-17 02:36:34.338623 | orchestrator | + content_sha1 = (known after apply)
2026-04-17 02:36:34.338627 | orchestrator | + content_sha256 = (known after apply)
2026-04-17 02:36:34.338630 | orchestrator | + content_sha512 = (known after apply)
2026-04-17 02:36:34.338634 | orchestrator | + directory_permission = "0777"
2026-04-17 02:36:34.338638 | orchestrator | + file_permission = "0644"
2026-04-17 02:36:34.338646 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-17 02:36:34.338650 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.338654 | orchestrator | }
2026-04-17 02:36:34.338726 | orchestrator |
2026-04-17 02:36:34.338736 | orchestrator | # local_file.inventory will be created
2026-04-17 02:36:34.338740 | orchestrator | + resource "local_file" "inventory" {
2026-04-17 02:36:34.338744 | orchestrator | + content = (known after apply)
2026-04-17 02:36:34.338748 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-17 02:36:34.338751 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-17 02:36:34.338755 | orchestrator | + content_md5 = (known after apply)
2026-04-17 02:36:34.338759 | orchestrator | + content_sha1 = (known after apply)
2026-04-17 02:36:34.338763 | orchestrator | + content_sha256 = (known after apply)
2026-04-17 02:36:34.338767 | orchestrator | + content_sha512 = (known after apply)
2026-04-17 02:36:34.338771 | orchestrator | + directory_permission = "0777"
2026-04-17 02:36:34.338774 | orchestrator | + file_permission = "0644"
2026-04-17 02:36:34.338778 | orchestrator | + filename = "inventory.ci"
2026-04-17 02:36:34.338782 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.338786 | orchestrator | }
2026-04-17 02:36:34.338874 | orchestrator |
2026-04-17 02:36:34.338879 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-17 02:36:34.338882 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-17 02:36:34.338887 | orchestrator | + content = (sensitive value)
2026-04-17 02:36:34.338890 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-17 02:36:34.338894 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-17 02:36:34.338898 | orchestrator | + content_md5 = (known after apply)
2026-04-17 02:36:34.338902 | orchestrator | + content_sha1 = (known after apply)
2026-04-17 02:36:34.338906 | orchestrator | + content_sha256 = (known after apply)
2026-04-17 02:36:34.338909 | orchestrator | + content_sha512 = (known after apply)
2026-04-17 02:36:34.338913 | orchestrator | + directory_permission = "0700"
2026-04-17 02:36:34.338917 | orchestrator | + file_permission = "0600"
2026-04-17 02:36:34.338921 | orchestrator | + filename = ".id_rsa.ci"
2026-04-17 02:36:34.338924 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.338928 | orchestrator | }
2026-04-17 02:36:34.338934 | orchestrator |
2026-04-17 02:36:34.338938 | orchestrator | # null_resource.node_semaphore will be created
2026-04-17 02:36:34.338941 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-17 02:36:34.338945 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.338949 | orchestrator | }
2026-04-17 02:36:34.338983 | orchestrator |
2026-04-17 02:36:34.338988 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-17 02:36:34.338992 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-17 02:36:34.338995 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.338999 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339003 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.339007 | orchestrator | + image_id = (known after apply)
2026-04-17 02:36:34.339011 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.339015 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-17 02:36:34.339058 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.339063 | orchestrator | + size = 80
2026-04-17 02:36:34.339066 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.339070 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.339074 | orchestrator | }
2026-04-17 02:36:34.339080 | orchestrator |
2026-04-17 02:36:34.339084 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-17 02:36:34.339088 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 02:36:34.339092 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.339096 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339099 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.339108 | orchestrator | + image_id = (known after apply)
2026-04-17 02:36:34.339112 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.339116 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-17 02:36:34.339119 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.339123 | orchestrator | + size = 80
2026-04-17 02:36:34.339127 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.339131 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.339135 | orchestrator | }
2026-04-17 02:36:34.339195 | orchestrator |
2026-04-17 02:36:34.339201 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-17 02:36:34.339205 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 02:36:34.339209 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.339213 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339216 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.339220 | orchestrator | + image_id = (known after apply)
2026-04-17 02:36:34.339224 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.339228 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-17 02:36:34.339232 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.339235 | orchestrator | + size = 80
2026-04-17 02:36:34.339239 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.339243 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.339247 | orchestrator | }
2026-04-17 02:36:34.339289 | orchestrator |
2026-04-17 02:36:34.339293 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-17 02:36:34.339297 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 02:36:34.339301 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.339305 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339309 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.339313 | orchestrator | + image_id = (known after apply)
2026-04-17 02:36:34.339317 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.339320 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-17 02:36:34.339324 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.339328 | orchestrator | + size = 80
2026-04-17 02:36:34.339332 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.339335 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.339339 | orchestrator | }
2026-04-17 02:36:34.339423 | orchestrator |
2026-04-17 02:36:34.339428 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-17 02:36:34.339432 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 02:36:34.339436 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.339439 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339443 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.339447 | orchestrator | + image_id = (known after apply)
2026-04-17 02:36:34.339451 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.339458 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-17 02:36:34.339462 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.339466 | orchestrator | + size = 80
2026-04-17 02:36:34.339470 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.339474 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.339478 | orchestrator | }
2026-04-17 02:36:34.339513 | orchestrator |
2026-04-17 02:36:34.339517 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-17 02:36:34.339521 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 02:36:34.339525 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.339529 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339532 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.339540 | orchestrator | + image_id = (known after apply)
2026-04-17 02:36:34.339544 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.339548 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-17 02:36:34.339552 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.339556 | orchestrator | + size = 80
2026-04-17 02:36:34.339559 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.339563 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.339567 | orchestrator | }
2026-04-17 02:36:34.339590 | orchestrator |
2026-04-17 02:36:34.339594 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-17 02:36:34.339598 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 02:36:34.339602 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.339606 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339609 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.339613 | orchestrator | + image_id = (known after apply)
2026-04-17 02:36:34.339617 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.339621 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-17 02:36:34.339625 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.339628 | orchestrator | + size = 80
2026-04-17 02:36:34.339632 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.339636 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.339640 | orchestrator | }
2026-04-17 02:36:34.339672 | orchestrator |
2026-04-17 02:36:34.339677 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-17 02:36:34.339681 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 02:36:34.339685 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.339689 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339692 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.339696 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.339700 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-17 02:36:34.339704 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.339708 | orchestrator | + size = 20
2026-04-17 02:36:34.339711 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.339715 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.339719 | orchestrator | }
2026-04-17 02:36:34.339748 | orchestrator |
2026-04-17 02:36:34.339752 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-17 02:36:34.339756 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 02:36:34.339760 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.339764 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339768 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.339771 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.339775 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-17 02:36:34.339779 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.339783 | orchestrator | + size = 20
2026-04-17 02:36:34.339786 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.339790 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.339794 | orchestrator | }
2026-04-17 02:36:34.339830 | orchestrator |
2026-04-17 02:36:34.339836 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-17 02:36:34.339840 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 02:36:34.339844 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.339848 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339852 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.339855 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.339859 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-17 02:36:34.339863 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.339870 | orchestrator | + size = 20
2026-04-17 02:36:34.339874 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.339878 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.339882 | orchestrator | }
2026-04-17 02:36:34.339906 | orchestrator |
2026-04-17 02:36:34.339910 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-17 02:36:34.339914 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 02:36:34.339918 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.339922 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339926 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.339929 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.339934 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-17 02:36:34.339937 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.339941 | orchestrator | + size = 20
2026-04-17 02:36:34.339945 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.339949 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.339952 | orchestrator | }
2026-04-17 02:36:34.339978 | orchestrator |
2026-04-17 02:36:34.339982 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-17 02:36:34.339986 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 02:36:34.339990 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.339994 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.339998 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.340001 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.340005 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-17 02:36:34.340009 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.340016 | orchestrator | + size = 20
2026-04-17 02:36:34.340036 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.340041 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.340044 | orchestrator | }
2026-04-17 02:36:34.340071 | orchestrator |
2026-04-17 02:36:34.340076 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-17 02:36:34.340079 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 02:36:34.340083 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.340087 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.340091 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.340095 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.340098 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-17 02:36:34.340102 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.340106 | orchestrator | + size = 20
2026-04-17 02:36:34.340110 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.340113 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.340117 | orchestrator | }
2026-04-17 02:36:34.340141 | orchestrator |
2026-04-17 02:36:34.340145 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-17 02:36:34.340149 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 02:36:34.340153 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.340157 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.340160 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.340164 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.340168 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-17 02:36:34.340172 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.340176 | orchestrator | + size = 20
2026-04-17 02:36:34.340179 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.340183 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.340187 | orchestrator | }
2026-04-17 02:36:34.340212 | orchestrator |
2026-04-17 02:36:34.340216 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-17 02:36:34.340220 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 02:36:34.340228 | orchestrator | + attachment = (known after apply)
2026-04-17 02:36:34.340232 | orchestrator | + availability_zone = "nova"
2026-04-17 02:36:34.340235 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.340239 | orchestrator | + metadata = (known after apply)
2026-04-17 02:36:34.340243 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-17 02:36:34.340247 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.340251 | orchestrator | + size = 20
2026-04-17 02:36:34.340254 | orchestrator | + volume_retype_policy = "never"
2026-04-17 02:36:34.340258 | orchestrator | + volume_type = "ssd"
2026-04-17 02:36:34.340262 | orchestrator | }
2026-04-17 02:36:34.340299 | orchestrator |
2026-04-17 02:36:34.340303 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-17 02:36:34.340307 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-17 02:36:34.340311 | orchestrator | + attachment = (known after apply) 2026-04-17 02:36:34.340315 | orchestrator | + availability_zone = "nova" 2026-04-17 02:36:34.340318 | orchestrator | + id = (known after apply) 2026-04-17 02:36:34.340322 | orchestrator | + metadata = (known after apply) 2026-04-17 02:36:34.340326 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-17 02:36:34.340330 | orchestrator | + region = (known after apply) 2026-04-17 02:36:34.340333 | orchestrator | + size = 20 2026-04-17 02:36:34.340337 | orchestrator | + volume_retype_policy = "never" 2026-04-17 02:36:34.340341 | orchestrator | + volume_type = "ssd" 2026-04-17 02:36:34.340345 | orchestrator | } 2026-04-17 02:36:34.340577 | orchestrator | 2026-04-17 02:36:34.340583 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-17 02:36:34.340587 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-17 02:36:34.340590 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 02:36:34.340594 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 02:36:34.340598 | orchestrator | + all_metadata = (known after apply) 2026-04-17 02:36:34.340602 | orchestrator | + all_tags = (known after apply) 2026-04-17 02:36:34.340606 | orchestrator | + availability_zone = "nova" 2026-04-17 02:36:34.340609 | orchestrator | + config_drive = true 2026-04-17 02:36:34.340613 | orchestrator | + created = (known after apply) 2026-04-17 02:36:34.340617 | orchestrator | + flavor_id = (known after apply) 2026-04-17 02:36:34.340621 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-17 02:36:34.340624 | orchestrator | + force_delete = false 2026-04-17 02:36:34.340628 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 02:36:34.340632 | 
orchestrator | + id = (known after apply) 2026-04-17 02:36:34.340636 | orchestrator | + image_id = (known after apply) 2026-04-17 02:36:34.340639 | orchestrator | + image_name = (known after apply) 2026-04-17 02:36:34.340643 | orchestrator | + key_pair = "testbed" 2026-04-17 02:36:34.340647 | orchestrator | + name = "testbed-manager" 2026-04-17 02:36:34.340651 | orchestrator | + power_state = "active" 2026-04-17 02:36:34.340654 | orchestrator | + region = (known after apply) 2026-04-17 02:36:34.340658 | orchestrator | + security_groups = (known after apply) 2026-04-17 02:36:34.340662 | orchestrator | + stop_before_destroy = false 2026-04-17 02:36:34.340666 | orchestrator | + updated = (known after apply) 2026-04-17 02:36:34.340669 | orchestrator | + user_data = (sensitive value) 2026-04-17 02:36:34.340673 | orchestrator | 2026-04-17 02:36:34.340677 | orchestrator | + block_device { 2026-04-17 02:36:34.340681 | orchestrator | + boot_index = 0 2026-04-17 02:36:34.340685 | orchestrator | + delete_on_termination = false 2026-04-17 02:36:34.340692 | orchestrator | + destination_type = "volume" 2026-04-17 02:36:34.340696 | orchestrator | + multiattach = false 2026-04-17 02:36:34.340700 | orchestrator | + source_type = "volume" 2026-04-17 02:36:34.340703 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.340714 | orchestrator | } 2026-04-17 02:36:34.340718 | orchestrator | 2026-04-17 02:36:34.340722 | orchestrator | + network { 2026-04-17 02:36:34.340726 | orchestrator | + access_network = false 2026-04-17 02:36:34.340729 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 02:36:34.340733 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 02:36:34.340737 | orchestrator | + mac = (known after apply) 2026-04-17 02:36:34.340741 | orchestrator | + name = (known after apply) 2026-04-17 02:36:34.340745 | orchestrator | + port = (known after apply) 2026-04-17 02:36:34.340748 | orchestrator | + uuid = (known after apply) 2026-04-17 
02:36:34.340752 | orchestrator | } 2026-04-17 02:36:34.340756 | orchestrator | } 2026-04-17 02:36:34.340842 | orchestrator | 2026-04-17 02:36:34.340849 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-17 02:36:34.340852 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-17 02:36:34.340856 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 02:36:34.340860 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 02:36:34.340864 | orchestrator | + all_metadata = (known after apply) 2026-04-17 02:36:34.340867 | orchestrator | + all_tags = (known after apply) 2026-04-17 02:36:34.340871 | orchestrator | + availability_zone = "nova" 2026-04-17 02:36:34.340875 | orchestrator | + config_drive = true 2026-04-17 02:36:34.340879 | orchestrator | + created = (known after apply) 2026-04-17 02:36:34.340882 | orchestrator | + flavor_id = (known after apply) 2026-04-17 02:36:34.340886 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 02:36:34.340890 | orchestrator | + force_delete = false 2026-04-17 02:36:34.340893 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 02:36:34.340897 | orchestrator | + id = (known after apply) 2026-04-17 02:36:34.340901 | orchestrator | + image_id = (known after apply) 2026-04-17 02:36:34.340905 | orchestrator | + image_name = (known after apply) 2026-04-17 02:36:34.340908 | orchestrator | + key_pair = "testbed" 2026-04-17 02:36:34.340912 | orchestrator | + name = "testbed-node-0" 2026-04-17 02:36:34.340916 | orchestrator | + power_state = "active" 2026-04-17 02:36:34.340920 | orchestrator | + region = (known after apply) 2026-04-17 02:36:34.340923 | orchestrator | + security_groups = (known after apply) 2026-04-17 02:36:34.340927 | orchestrator | + stop_before_destroy = false 2026-04-17 02:36:34.340931 | orchestrator | + updated = (known after apply) 2026-04-17 02:36:34.340935 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 02:36:34.340939 | orchestrator | 2026-04-17 02:36:34.340942 | orchestrator | + block_device { 2026-04-17 02:36:34.340946 | orchestrator | + boot_index = 0 2026-04-17 02:36:34.340950 | orchestrator | + delete_on_termination = false 2026-04-17 02:36:34.340953 | orchestrator | + destination_type = "volume" 2026-04-17 02:36:34.340957 | orchestrator | + multiattach = false 2026-04-17 02:36:34.340961 | orchestrator | + source_type = "volume" 2026-04-17 02:36:34.340965 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.340969 | orchestrator | } 2026-04-17 02:36:34.340972 | orchestrator | 2026-04-17 02:36:34.340977 | orchestrator | + network { 2026-04-17 02:36:34.340980 | orchestrator | + access_network = false 2026-04-17 02:36:34.340984 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 02:36:34.340988 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 02:36:34.340992 | orchestrator | + mac = (known after apply) 2026-04-17 02:36:34.340995 | orchestrator | + name = (known after apply) 2026-04-17 02:36:34.340999 | orchestrator | + port = (known after apply) 2026-04-17 02:36:34.341003 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.341007 | orchestrator | } 2026-04-17 02:36:34.341010 | orchestrator | } 2026-04-17 02:36:34.341118 | orchestrator | 2026-04-17 02:36:34.341123 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-17 02:36:34.341127 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-17 02:36:34.341131 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 02:36:34.341139 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 02:36:34.341143 | orchestrator | + all_metadata = (known after apply) 2026-04-17 02:36:34.341147 | orchestrator | + all_tags = (known after apply) 2026-04-17 02:36:34.341151 | orchestrator | + availability_zone = "nova" 2026-04-17 02:36:34.341154 
| orchestrator | + config_drive = true 2026-04-17 02:36:34.341158 | orchestrator | + created = (known after apply) 2026-04-17 02:36:34.341162 | orchestrator | + flavor_id = (known after apply) 2026-04-17 02:36:34.341166 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 02:36:34.341170 | orchestrator | + force_delete = false 2026-04-17 02:36:34.341173 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 02:36:34.341177 | orchestrator | + id = (known after apply) 2026-04-17 02:36:34.341181 | orchestrator | + image_id = (known after apply) 2026-04-17 02:36:34.341184 | orchestrator | + image_name = (known after apply) 2026-04-17 02:36:34.341188 | orchestrator | + key_pair = "testbed" 2026-04-17 02:36:34.341192 | orchestrator | + name = "testbed-node-1" 2026-04-17 02:36:34.341196 | orchestrator | + power_state = "active" 2026-04-17 02:36:34.341200 | orchestrator | + region = (known after apply) 2026-04-17 02:36:34.341203 | orchestrator | + security_groups = (known after apply) 2026-04-17 02:36:34.341207 | orchestrator | + stop_before_destroy = false 2026-04-17 02:36:34.341211 | orchestrator | + updated = (known after apply) 2026-04-17 02:36:34.341215 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 02:36:34.341219 | orchestrator | 2026-04-17 02:36:34.341222 | orchestrator | + block_device { 2026-04-17 02:36:34.341226 | orchestrator | + boot_index = 0 2026-04-17 02:36:34.341230 | orchestrator | + delete_on_termination = false 2026-04-17 02:36:34.341234 | orchestrator | + destination_type = "volume" 2026-04-17 02:36:34.341238 | orchestrator | + multiattach = false 2026-04-17 02:36:34.341241 | orchestrator | + source_type = "volume" 2026-04-17 02:36:34.341245 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.341249 | orchestrator | } 2026-04-17 02:36:34.341253 | orchestrator | 2026-04-17 02:36:34.341256 | orchestrator | + network { 2026-04-17 02:36:34.341260 | orchestrator | + access_network = 
false 2026-04-17 02:36:34.341264 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 02:36:34.341268 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 02:36:34.341271 | orchestrator | + mac = (known after apply) 2026-04-17 02:36:34.341275 | orchestrator | + name = (known after apply) 2026-04-17 02:36:34.341279 | orchestrator | + port = (known after apply) 2026-04-17 02:36:34.341283 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.341286 | orchestrator | } 2026-04-17 02:36:34.341290 | orchestrator | } 2026-04-17 02:36:34.341377 | orchestrator | 2026-04-17 02:36:34.341382 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-17 02:36:34.341386 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-17 02:36:34.341390 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 02:36:34.341394 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 02:36:34.341398 | orchestrator | + all_metadata = (known after apply) 2026-04-17 02:36:34.341402 | orchestrator | + all_tags = (known after apply) 2026-04-17 02:36:34.341415 | orchestrator | + availability_zone = "nova" 2026-04-17 02:36:34.341419 | orchestrator | + config_drive = true 2026-04-17 02:36:34.341423 | orchestrator | + created = (known after apply) 2026-04-17 02:36:34.341427 | orchestrator | + flavor_id = (known after apply) 2026-04-17 02:36:34.341431 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 02:36:34.341434 | orchestrator | + force_delete = false 2026-04-17 02:36:34.341438 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 02:36:34.341442 | orchestrator | + id = (known after apply) 2026-04-17 02:36:34.341446 | orchestrator | + image_id = (known after apply) 2026-04-17 02:36:34.341454 | orchestrator | + image_name = (known after apply) 2026-04-17 02:36:34.341458 | orchestrator | + key_pair = "testbed" 2026-04-17 02:36:34.341462 | orchestrator | + name = 
"testbed-node-2" 2026-04-17 02:36:34.341465 | orchestrator | + power_state = "active" 2026-04-17 02:36:34.341469 | orchestrator | + region = (known after apply) 2026-04-17 02:36:34.341473 | orchestrator | + security_groups = (known after apply) 2026-04-17 02:36:34.341477 | orchestrator | + stop_before_destroy = false 2026-04-17 02:36:34.341480 | orchestrator | + updated = (known after apply) 2026-04-17 02:36:34.341484 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 02:36:34.341488 | orchestrator | 2026-04-17 02:36:34.341492 | orchestrator | + block_device { 2026-04-17 02:36:34.341496 | orchestrator | + boot_index = 0 2026-04-17 02:36:34.341499 | orchestrator | + delete_on_termination = false 2026-04-17 02:36:34.341503 | orchestrator | + destination_type = "volume" 2026-04-17 02:36:34.341507 | orchestrator | + multiattach = false 2026-04-17 02:36:34.341510 | orchestrator | + source_type = "volume" 2026-04-17 02:36:34.341514 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.341518 | orchestrator | } 2026-04-17 02:36:34.341522 | orchestrator | 2026-04-17 02:36:34.341525 | orchestrator | + network { 2026-04-17 02:36:34.341529 | orchestrator | + access_network = false 2026-04-17 02:36:34.341533 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 02:36:34.341537 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 02:36:34.341540 | orchestrator | + mac = (known after apply) 2026-04-17 02:36:34.341544 | orchestrator | + name = (known after apply) 2026-04-17 02:36:34.341548 | orchestrator | + port = (known after apply) 2026-04-17 02:36:34.341552 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.341556 | orchestrator | } 2026-04-17 02:36:34.341559 | orchestrator | } 2026-04-17 02:36:34.341650 | orchestrator | 2026-04-17 02:36:34.341655 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-17 02:36:34.341659 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-17 02:36:34.341663 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 02:36:34.341667 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 02:36:34.341670 | orchestrator | + all_metadata = (known after apply) 2026-04-17 02:36:34.341674 | orchestrator | + all_tags = (known after apply) 2026-04-17 02:36:34.341678 | orchestrator | + availability_zone = "nova" 2026-04-17 02:36:34.341682 | orchestrator | + config_drive = true 2026-04-17 02:36:34.341685 | orchestrator | + created = (known after apply) 2026-04-17 02:36:34.341689 | orchestrator | + flavor_id = (known after apply) 2026-04-17 02:36:34.341693 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 02:36:34.341697 | orchestrator | + force_delete = false 2026-04-17 02:36:34.341700 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 02:36:34.341704 | orchestrator | + id = (known after apply) 2026-04-17 02:36:34.341708 | orchestrator | + image_id = (known after apply) 2026-04-17 02:36:34.341712 | orchestrator | + image_name = (known after apply) 2026-04-17 02:36:34.341715 | orchestrator | + key_pair = "testbed" 2026-04-17 02:36:34.341719 | orchestrator | + name = "testbed-node-3" 2026-04-17 02:36:34.341723 | orchestrator | + power_state = "active" 2026-04-17 02:36:34.341727 | orchestrator | + region = (known after apply) 2026-04-17 02:36:34.341730 | orchestrator | + security_groups = (known after apply) 2026-04-17 02:36:34.341734 | orchestrator | + stop_before_destroy = false 2026-04-17 02:36:34.341738 | orchestrator | + updated = (known after apply) 2026-04-17 02:36:34.341742 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 02:36:34.341746 | orchestrator | 2026-04-17 02:36:34.341749 | orchestrator | + block_device { 2026-04-17 02:36:34.341756 | orchestrator | + boot_index = 0 2026-04-17 02:36:34.341760 | orchestrator | + delete_on_termination = false 2026-04-17 
02:36:34.341763 | orchestrator | + destination_type = "volume" 2026-04-17 02:36:34.341770 | orchestrator | + multiattach = false 2026-04-17 02:36:34.341774 | orchestrator | + source_type = "volume" 2026-04-17 02:36:34.341778 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.341781 | orchestrator | } 2026-04-17 02:36:34.341785 | orchestrator | 2026-04-17 02:36:34.341789 | orchestrator | + network { 2026-04-17 02:36:34.341793 | orchestrator | + access_network = false 2026-04-17 02:36:34.341796 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 02:36:34.341800 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 02:36:34.341804 | orchestrator | + mac = (known after apply) 2026-04-17 02:36:34.341808 | orchestrator | + name = (known after apply) 2026-04-17 02:36:34.341811 | orchestrator | + port = (known after apply) 2026-04-17 02:36:34.341815 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.341819 | orchestrator | } 2026-04-17 02:36:34.341823 | orchestrator | } 2026-04-17 02:36:34.341912 | orchestrator | 2026-04-17 02:36:34.341916 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-17 02:36:34.341920 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-17 02:36:34.341924 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 02:36:34.341928 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 02:36:34.341932 | orchestrator | + all_metadata = (known after apply) 2026-04-17 02:36:34.341936 | orchestrator | + all_tags = (known after apply) 2026-04-17 02:36:34.341940 | orchestrator | + availability_zone = "nova" 2026-04-17 02:36:34.341943 | orchestrator | + config_drive = true 2026-04-17 02:36:34.341947 | orchestrator | + created = (known after apply) 2026-04-17 02:36:34.341951 | orchestrator | + flavor_id = (known after apply) 2026-04-17 02:36:34.341954 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 02:36:34.341958 | 
orchestrator | + force_delete = false 2026-04-17 02:36:34.341962 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 02:36:34.341966 | orchestrator | + id = (known after apply) 2026-04-17 02:36:34.341969 | orchestrator | + image_id = (known after apply) 2026-04-17 02:36:34.341973 | orchestrator | + image_name = (known after apply) 2026-04-17 02:36:34.341977 | orchestrator | + key_pair = "testbed" 2026-04-17 02:36:34.341981 | orchestrator | + name = "testbed-node-4" 2026-04-17 02:36:34.341984 | orchestrator | + power_state = "active" 2026-04-17 02:36:34.341988 | orchestrator | + region = (known after apply) 2026-04-17 02:36:34.341992 | orchestrator | + security_groups = (known after apply) 2026-04-17 02:36:34.341995 | orchestrator | + stop_before_destroy = false 2026-04-17 02:36:34.341999 | orchestrator | + updated = (known after apply) 2026-04-17 02:36:34.342003 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 02:36:34.342007 | orchestrator | 2026-04-17 02:36:34.342011 | orchestrator | + block_device { 2026-04-17 02:36:34.342049 | orchestrator | + boot_index = 0 2026-04-17 02:36:34.342055 | orchestrator | + delete_on_termination = false 2026-04-17 02:36:34.342058 | orchestrator | + destination_type = "volume" 2026-04-17 02:36:34.342062 | orchestrator | + multiattach = false 2026-04-17 02:36:34.342066 | orchestrator | + source_type = "volume" 2026-04-17 02:36:34.342070 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.342073 | orchestrator | } 2026-04-17 02:36:34.342077 | orchestrator | 2026-04-17 02:36:34.342081 | orchestrator | + network { 2026-04-17 02:36:34.342085 | orchestrator | + access_network = false 2026-04-17 02:36:34.342088 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 02:36:34.342092 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 02:36:34.342096 | orchestrator | + mac = (known after apply) 2026-04-17 02:36:34.342100 | orchestrator | + name = (known 
after apply) 2026-04-17 02:36:34.342103 | orchestrator | + port = (known after apply) 2026-04-17 02:36:34.342107 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.342111 | orchestrator | } 2026-04-17 02:36:34.342115 | orchestrator | } 2026-04-17 02:36:34.342328 | orchestrator | 2026-04-17 02:36:34.342339 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-17 02:36:34.342343 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-17 02:36:34.342347 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 02:36:34.342350 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 02:36:34.342354 | orchestrator | + all_metadata = (known after apply) 2026-04-17 02:36:34.342358 | orchestrator | + all_tags = (known after apply) 2026-04-17 02:36:34.342362 | orchestrator | + availability_zone = "nova" 2026-04-17 02:36:34.342365 | orchestrator | + config_drive = true 2026-04-17 02:36:34.342369 | orchestrator | + created = (known after apply) 2026-04-17 02:36:34.342373 | orchestrator | + flavor_id = (known after apply) 2026-04-17 02:36:34.342377 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 02:36:34.342381 | orchestrator | + force_delete = false 2026-04-17 02:36:34.342388 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 02:36:34.342392 | orchestrator | + id = (known after apply) 2026-04-17 02:36:34.342395 | orchestrator | + image_id = (known after apply) 2026-04-17 02:36:34.342399 | orchestrator | + image_name = (known after apply) 2026-04-17 02:36:34.342403 | orchestrator | + key_pair = "testbed" 2026-04-17 02:36:34.342407 | orchestrator | + name = "testbed-node-5" 2026-04-17 02:36:34.342410 | orchestrator | + power_state = "active" 2026-04-17 02:36:34.342414 | orchestrator | + region = (known after apply) 2026-04-17 02:36:34.342418 | orchestrator | + security_groups = (known after apply) 2026-04-17 02:36:34.342422 | orchestrator | + 
stop_before_destroy = false 2026-04-17 02:36:34.342425 | orchestrator | + updated = (known after apply) 2026-04-17 02:36:34.342429 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 02:36:34.342433 | orchestrator | 2026-04-17 02:36:34.342437 | orchestrator | + block_device { 2026-04-17 02:36:34.342440 | orchestrator | + boot_index = 0 2026-04-17 02:36:34.342444 | orchestrator | + delete_on_termination = false 2026-04-17 02:36:34.342448 | orchestrator | + destination_type = "volume" 2026-04-17 02:36:34.342452 | orchestrator | + multiattach = false 2026-04-17 02:36:34.342455 | orchestrator | + source_type = "volume" 2026-04-17 02:36:34.342459 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.342463 | orchestrator | } 2026-04-17 02:36:34.342467 | orchestrator | 2026-04-17 02:36:34.342471 | orchestrator | + network { 2026-04-17 02:36:34.342474 | orchestrator | + access_network = false 2026-04-17 02:36:34.342478 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 02:36:34.342482 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 02:36:34.342486 | orchestrator | + mac = (known after apply) 2026-04-17 02:36:34.342490 | orchestrator | + name = (known after apply) 2026-04-17 02:36:34.342493 | orchestrator | + port = (known after apply) 2026-04-17 02:36:34.342497 | orchestrator | + uuid = (known after apply) 2026-04-17 02:36:34.342501 | orchestrator | } 2026-04-17 02:36:34.342505 | orchestrator | } 2026-04-17 02:36:34.342510 | orchestrator | 2026-04-17 02:36:34.342514 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-17 02:36:34.342518 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-17 02:36:34.342522 | orchestrator | + fingerprint = (known after apply) 2026-04-17 02:36:34.342525 | orchestrator | + id = (known after apply) 2026-04-17 02:36:34.342529 | orchestrator | + name = "testbed" 2026-04-17 02:36:34.342533 | orchestrator | + private_key = 
(sensitive value) 2026-04-17 02:36:34.342537 | orchestrator | + public_key = (known after apply) 2026-04-17 02:36:34.342540 | orchestrator | + region = (known after apply) 2026-04-17 02:36:34.342544 | orchestrator | + user_id = (known after apply) 2026-04-17 02:36:34.342548 | orchestrator | } 2026-04-17 02:36:34.342552 | orchestrator | 2026-04-17 02:36:34.342556 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-17 02:36:34.342560 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-17 02:36:34.342568 | orchestrator | + device = (known after apply) 2026-04-17 02:36:34.342572 | orchestrator | + id = (known after apply) 2026-04-17 02:36:34.342575 | orchestrator | + instance_id = (known after apply) 2026-04-17 02:36:34.342579 | orchestrator | + region = (known after apply) 2026-04-17 02:36:34.342583 | orchestrator | + volume_id = (known after apply) 2026-04-17 02:36:34.342587 | orchestrator | } 2026-04-17 02:36:34.342590 | orchestrator | 2026-04-17 02:36:34.342594 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-17 02:36:34.342598 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-17 02:36:34.342602 | orchestrator | + device = (known after apply) 2026-04-17 02:36:34.342606 | orchestrator | + id = (known after apply) 2026-04-17 02:36:34.342609 | orchestrator | + instance_id = (known after apply) 2026-04-17 02:36:34.342613 | orchestrator | + region = (known after apply) 2026-04-17 02:36:34.342617 | orchestrator | + volume_id = (known after apply) 2026-04-17 02:36:34.342621 | orchestrator | } 2026-04-17 02:36:34.342626 | orchestrator | 2026-04-17 02:36:34.342630 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-17 02:36:34.342634 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-04-17 02:36:34.342638 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-17 02:36:34.346144 | orchestrator | + network_id = (known after apply)
2026-04-17 02:36:34.346149 | orchestrator | + no_gateway = false
2026-04-17 02:36:34.346153 | orchestrator | + region = (known after apply)
2026-04-17 02:36:34.346158 | orchestrator | + service_types = (known after apply)
2026-04-17 02:36:34.346169 | orchestrator | + tenant_id = (known after apply)
2026-04-17 02:36:34.346173 | orchestrator |
2026-04-17 02:36:34.346177 | orchestrator | + allocation_pool {
2026-04-17 02:36:34.346182 | orchestrator | + end = "192.168.31.250"
2026-04-17 02:36:34.346185 | orchestrator | + start = "192.168.31.200"
2026-04-17 02:36:34.346189 | orchestrator | }
2026-04-17 02:36:34.346193 | orchestrator | }
2026-04-17 02:36:34.346197 | orchestrator |
2026-04-17 02:36:34.346201 | orchestrator | # terraform_data.image will be created
2026-04-17 02:36:34.346204 | orchestrator | + resource "terraform_data" "image" {
2026-04-17 02:36:34.346208 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.346213 | orchestrator | + input = "Ubuntu 24.04"
2026-04-17 02:36:34.346217 | orchestrator | + output = (known after apply)
2026-04-17 02:36:34.346220 | orchestrator | }
2026-04-17 02:36:34.346227 | orchestrator |
2026-04-17 02:36:34.346232 | orchestrator | # terraform_data.image_node will be created
2026-04-17 02:36:34.346235 | orchestrator | + resource "terraform_data" "image_node" {
2026-04-17 02:36:34.346239 | orchestrator | + id = (known after apply)
2026-04-17 02:36:34.346244 | orchestrator | + input = "Ubuntu 24.04"
2026-04-17 02:36:34.346247 | orchestrator | + output = (known after apply)
2026-04-17 02:36:34.346251 | orchestrator | }
2026-04-17 02:36:34.346255 | orchestrator |
2026-04-17 02:36:34.346259 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-04-17 02:36:34.346263 | orchestrator |
2026-04-17 02:36:34.346267 | orchestrator | Changes to Outputs:
2026-04-17 02:36:34.346271 | orchestrator | + manager_address = (sensitive value)
2026-04-17 02:36:34.346275 | orchestrator | + private_key = (sensitive value)
2026-04-17 02:36:34.524504 | orchestrator | terraform_data.image_node: Creating...
2026-04-17 02:36:34.598680 | orchestrator | terraform_data.image: Creating...
2026-04-17 02:36:34.599240 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=83c2dd24-3044-7076-f734-c33bb334b472]
2026-04-17 02:36:34.599401 | orchestrator | terraform_data.image: Creation complete after 0s [id=b876cc60-be58-95ce-a114-bf23a1a6fe06]
2026-04-17 02:36:34.620905 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-17 02:36:34.620993 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-17 02:36:34.627356 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-17 02:36:34.627440 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-17 02:36:34.631321 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-17 02:36:34.631403 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-17 02:36:34.633764 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-17 02:36:34.633924 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-17 02:36:34.634053 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-17 02:36:34.636616 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-17 02:36:35.096084 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-17 02:36:35.099936 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-17 02:36:35.113596 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-17 02:36:35.119823 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-17 02:36:35.136922 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-17 02:36:35.146162 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-17 02:36:35.640307 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=d4f69688-a199-4163-87da-acede4aab297]
2026-04-17 02:36:35.644758 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-17 02:36:38.260157 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=0790345e-708b-44d5-b129-73ff7ecdfb8b]
2026-04-17 02:36:38.267198 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac]
2026-04-17 02:36:38.271341 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=c054ea69-870b-4e6c-a28f-b4f3aaa6484b]
2026-04-17 02:36:38.274693 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=cdcd9064-7955-4761-96c4-269b5aa6d784]
2026-04-17 02:36:38.277079 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-17 02:36:38.277142 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-17 02:36:38.278557 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-17 02:36:38.280481 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-17 02:36:38.283506 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4]
2026-04-17 02:36:38.289457 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=193d71a8-114c-4752-adc0-dee4f1d71a96]
2026-04-17 02:36:38.293070 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-17 02:36:38.299302 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-17 02:36:38.364847 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=348c4a49-80d1-4817-b52d-126919837098]
2026-04-17 02:36:38.367165 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=243e8c65-8f34-4fed-aca0-50c577764c9c]
2026-04-17 02:36:38.384225 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-17 02:36:38.385423 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-17 02:36:38.391637 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=0ba1f4f500648841f2e04214160c227600fac4b6]
2026-04-17 02:36:38.391698 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=8ab95973-5989-4e6f-8d83-877ad6e28134]
2026-04-17 02:36:38.395413 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=77b33fc9415a6c9a09a9772f3a58965b102bc942]
2026-04-17 02:36:38.398512 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-17 02:36:38.979113 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=11ed6889-50a7-45eb-8f5f-b49aa967e3d6]
2026-04-17 02:36:39.211594 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=db71f17b-6275-4d01-a4ec-6c06420eb04c]
2026-04-17 02:36:39.218482 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-17 02:36:41.651169 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=41525a0f-b2ac-45bd-994e-16d35250beaa]
2026-04-17 02:36:41.687348 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=510ba09c-6639-45c5-b5d5-17f7dd37831d]
2026-04-17 02:36:41.696540 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=1d6df01d-73bc-4a8f-b4ef-36e98f006fb7]
2026-04-17 02:36:41.711704 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=60cf27b4-7c66-4d7c-95df-912b136ea49d]
2026-04-17 02:36:41.719999 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=b9d69c97-6a14-4810-858c-efad7be3f87e]
2026-04-17 02:36:41.737582 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=fc59f804-1091-4440-a733-689672c4390d]
2026-04-17 02:36:42.310569 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=b4687903-568c-4e89-a1d1-5a23def1ba5f]
2026-04-17 02:36:42.315320 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-17 02:36:42.317045 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-17 02:36:42.318940 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-17 02:36:42.520875 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=98375849-9ad4-4603-880f-fd6a1f963d40]
2026-04-17 02:36:42.537641 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-17 02:36:42.538203 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-17 02:36:42.538248 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-17 02:36:42.542408 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-17 02:36:42.542598 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-17 02:36:42.543553 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-17 02:36:42.596736 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=730e659e-ee0f-4e15-9699-c9d3800cd6cf]
2026-04-17 02:36:42.603927 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-17 02:36:42.604513 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-17 02:36:42.606300 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-17 02:36:42.762368 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=d521142c-24c2-4c90-830d-c1a4bcacb693]
2026-04-17 02:36:42.764364 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=0d0f73f1-ed47-4197-a352-597841c0a1d8]
2026-04-17 02:36:42.767278 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-17 02:36:42.771282 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-17 02:36:42.904606 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=9bc66c62-6cb2-4359-a368-8ede9e7064ea]
2026-04-17 02:36:42.917008 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-17 02:36:42.990898 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=56c0f52a-3da0-416d-8a1c-11ac34783390]
2026-04-17 02:36:43.004996 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-17 02:36:43.059090 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=814c176f-4ec9-437a-860b-061f79a5b615]
2026-04-17 02:36:43.067842 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-17 02:36:43.184190 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=1c27ec21-0f48-4fd1-9b2c-ddd33e8b294f]
2026-04-17 02:36:43.193968 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-17 02:36:43.225064 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=8d10b59c-1433-4567-995a-48d6b66501d3]
2026-04-17 02:36:43.236227 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-17 02:36:43.389944 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=865d71fb-5169-4236-8a65-6bc5963bc291]
2026-04-17 02:36:43.442395 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=603fb9d1-0e53-429a-af8e-eb93cf5d18c7]
2026-04-17 02:36:43.573850 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=5ecc1ebb-bab6-4eeb-8ac8-a8cb3b581e25]
2026-04-17 02:36:43.611211 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=eaebda7f-7735-4785-aed7-249b2a22c7bc]
2026-04-17 02:36:43.675409 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=dd198384-8465-4527-9b5a-81aa3b5a0090]
2026-04-17 02:36:43.829938 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=19374b75-781e-4ba7-b0a7-005125eaac88]
2026-04-17 02:36:43.869406 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=821cfc1e-0d01-4c68-ba4b-b079fa2ad78b]
2026-04-17 02:36:43.931428 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=ee4b7ef4-0fb9-4115-aab4-acd00781834f]
2026-04-17 02:36:44.105121 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=079b40f9-8bfb-4a47-905e-ff258adf68ca]
2026-04-17 02:36:45.321112 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=b99567fa-6e2c-42ad-a885-081307373d0b]
2026-04-17 02:36:45.341987 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-17 02:36:45.352715 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-17 02:36:45.355046 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-17 02:36:45.359558 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-17 02:36:45.366117 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-17 02:36:45.373754 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-17 02:36:45.375179 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-17 02:36:46.675631 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=6b690889-0a28-4025-8b7d-ed3f29b60cdd]
2026-04-17 02:36:46.688680 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-17 02:36:46.689011 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-17 02:36:46.689548 | orchestrator | local_file.inventory: Creating...
2026-04-17 02:36:46.692176 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=bcb9748f56ac77b548a79d93ad9e9a198f0ad41c]
2026-04-17 02:36:46.695315 | orchestrator | local_file.inventory: Creation complete after 0s [id=1b27616b2400f2c7509dd4df9900f55e78e70d58]
2026-04-17 02:36:47.486584 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=6b690889-0a28-4025-8b7d-ed3f29b60cdd]
2026-04-17 02:36:55.354167 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-17 02:36:55.355282 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-17 02:36:55.360587 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-17 02:36:55.367002 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-17 02:36:55.376646 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-17 02:36:55.376777 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-17 02:37:05.355368 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-17 02:37:05.355475 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-17 02:37:05.361221 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-17 02:37:05.367519 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-17 02:37:05.377806 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-17 02:37:05.377917 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-17 02:37:05.829629 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=9c2a145e-3e38-4d06-adba-1cd1dedfa5ef]
2026-04-17 02:37:05.838871 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=a2f42fa4-ef2f-4e94-9e0c-550dbea00da1]
2026-04-17 02:37:05.875266 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=998ca04a-9638-4918-858b-2f50b0e94651]
2026-04-17 02:37:05.885382 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=9516d90f-bc24-446a-891a-8e14b954855e]
2026-04-17 02:37:06.193020 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=04bf3f53-1268-4b1d-a5d2-9212898bf5d2]
2026-04-17 02:37:15.378226 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-17 02:37:16.431944 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=cd4d04a5-15c9-4c42-b350-42d0898b5b2d]
2026-04-17 02:37:16.450978 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-17 02:37:16.465300 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8587802819122594049]
2026-04-17 02:37:16.470566 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-17 02:37:16.472168 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-17 02:37:16.476802 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-17 02:37:16.478470 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-17 02:37:16.479204 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-17 02:37:16.480194 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-17 02:37:16.480497 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-17 02:37:16.494354 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-17 02:37:16.501056 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-17 02:37:16.507929 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-17 02:37:19.887626 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=9516d90f-bc24-446a-891a-8e14b954855e/c054ea69-870b-4e6c-a28f-b4f3aaa6484b]
2026-04-17 02:37:19.897286 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=04bf3f53-1268-4b1d-a5d2-9212898bf5d2/cdcd9064-7955-4761-96c4-269b5aa6d784]
2026-04-17 02:37:19.907648 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=9516d90f-bc24-446a-891a-8e14b954855e/348c4a49-80d1-4817-b52d-126919837098]
2026-04-17 02:37:19.916716 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=04bf3f53-1268-4b1d-a5d2-9212898bf5d2/193d71a8-114c-4752-adc0-dee4f1d71a96]
2026-04-17 02:37:19.936735 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=a2f42fa4-ef2f-4e94-9e0c-550dbea00da1/0790345e-708b-44d5-b129-73ff7ecdfb8b]
2026-04-17 02:37:19.937578 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=a2f42fa4-ef2f-4e94-9e0c-550dbea00da1/8ab95973-5989-4e6f-8d83-877ad6e28134]
2026-04-17 02:37:25.990367 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=9516d90f-bc24-446a-891a-8e14b954855e/243e8c65-8f34-4fed-aca0-50c577764c9c]
2026-04-17 02:37:26.002923 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=04bf3f53-1268-4b1d-a5d2-9212898bf5d2/ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4]
2026-04-17 02:37:26.020853 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=a2f42fa4-ef2f-4e94-9e0c-550dbea00da1/1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac]
2026-04-17 02:37:26.509118 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-17 02:37:36.509271 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-17 02:37:36.962950 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=57e60de7-aa1d-4969-a471-78c402547eb9]
2026-04-17 02:37:36.980312 | orchestrator |
2026-04-17 02:37:36.980487 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-17 02:37:36.980509 | orchestrator |
2026-04-17 02:37:36.980525 | orchestrator | Outputs:
2026-04-17 02:37:36.980551 | orchestrator |
2026-04-17 02:37:36.980582 | orchestrator | manager_address =
2026-04-17 02:37:36.980598 | orchestrator | private_key =
2026-04-17 02:37:37.440101 | orchestrator | ok: Runtime: 0:01:09.681559
2026-04-17 02:37:37.472874 |
2026-04-17 02:37:37.473003 | TASK [Fetch manager address]
2026-04-17 02:37:37.940499 | orchestrator | ok
2026-04-17 02:37:37.952798 |
2026-04-17 02:37:37.952942 | TASK [Set manager_host address]
2026-04-17 02:37:38.036214 | orchestrator | ok
2026-04-17 02:37:38.046392 |
2026-04-17 02:37:38.046544 | LOOP [Update ansible collections]
2026-04-17 02:37:39.015023 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-17 02:37:39.015344 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-17 02:37:39.015412 | orchestrator | Starting galaxy collection install process
2026-04-17 02:37:39.015458 | orchestrator | Process install dependency map
2026-04-17 02:37:39.015500 | orchestrator | Starting collection install process
2026-04-17 02:37:39.015538 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2026-04-17 02:37:39.015584 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2026-04-17 02:37:39.015628 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-17 02:37:39.015720 | orchestrator | ok: Item: commons Runtime: 0:00:00.612547
2026-04-17 02:37:39.979519 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-17 02:37:39.979709 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-17 02:37:39.979775 | orchestrator | Starting galaxy collection install process
2026-04-17 02:37:39.979816 | orchestrator | Process install dependency map
2026-04-17 02:37:39.979853 | orchestrator | Starting collection install process
2026-04-17 02:37:39.979888 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2026-04-17 02:37:39.979923 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2026-04-17 02:37:39.979975 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-17 02:37:39.980036 | orchestrator | ok: Item: services Runtime: 0:00:00.688968
2026-04-17 02:37:40.002370 |
2026-04-17 02:37:40.002558 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-17 02:37:50.587817 | orchestrator | ok
2026-04-17 02:37:50.597587 |
2026-04-17 02:37:50.597698 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-17 02:38:50.638557 | orchestrator | ok
2026-04-17 02:38:50.657557 |
2026-04-17 02:38:50.657731 | TASK [Fetch manager ssh hostkey]
2026-04-17 02:38:52.238486 | orchestrator | Output suppressed because no_log was given
2026-04-17 02:38:52.254386 |
2026-04-17 02:38:52.254606 | TASK [Get ssh keypair from terraform environment]
2026-04-17 02:38:52.795859 | orchestrator | ok: Runtime: 0:00:00.008942
2026-04-17 02:38:52.813640 |
2026-04-17 02:38:52.813796 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-17 02:38:52.850176 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-17 02:38:52.859760 |
2026-04-17 02:38:52.859881 | TASK [Run manager part 0]
2026-04-17 02:38:53.834553 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-17 02:38:53.884642 | orchestrator |
2026-04-17 02:38:53.884695 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-17 02:38:53.884702 | orchestrator |
2026-04-17 02:38:53.884716 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-17 02:39:45.177558 | orchestrator | ok: [testbed-manager]
2026-04-17 02:39:45.177636 | orchestrator |
2026-04-17 02:39:45.177654 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-17 02:39:45.177664 | orchestrator |
2026-04-17 02:39:45.177672 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 02:39:47.057123 | orchestrator | ok: [testbed-manager]
2026-04-17 02:39:47.057188 | orchestrator |
2026-04-17 02:39:47.057199 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-17 02:39:47.706918 | orchestrator | ok: [testbed-manager]
2026-04-17 02:39:47.706972 | orchestrator |
2026-04-17 02:39:47.706981 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-17 02:39:47.760811 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:39:47.760867 | orchestrator |
2026-04-17 02:39:47.760879 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-17 02:39:47.796430 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:39:47.796491 | orchestrator |
2026-04-17 02:39:47.796504 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-17 02:39:47.834830 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:39:47.834878 | orchestrator |
2026-04-17 02:39:47.834883 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-17 02:39:48.642796 | orchestrator | changed: [testbed-manager]
2026-04-17 02:39:48.642859 | orchestrator |
2026-04-17 02:39:48.642870 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-17 02:43:14.540265 | orchestrator | changed: [testbed-manager]
2026-04-17 02:43:14.540378 | orchestrator |
2026-04-17 02:43:14.540400 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-17 02:47:02.471078 | orchestrator | changed: [testbed-manager]
2026-04-17 02:47:02.471198 | orchestrator |
2026-04-17 02:47:02.471298 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-17 02:47:28.463855 | orchestrator | changed: [testbed-manager]
2026-04-17 02:47:28.463896 | orchestrator |
2026-04-17 02:47:28.463904 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-17 02:47:37.701296 | orchestrator | changed: [testbed-manager]
2026-04-17 02:47:37.701353 | orchestrator |
2026-04-17 02:47:37.701361 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-17 02:47:37.738535 | orchestrator | ok: [testbed-manager]
2026-04-17 02:47:37.738580 | orchestrator |
2026-04-17 02:47:37.738589 | orchestrator | TASK [Get current user] ********************************************************
2026-04-17 02:47:38.459125 | orchestrator | ok: [testbed-manager]
2026-04-17 02:47:38.459261 | orchestrator |
2026-04-17 02:47:38.459275 | orchestrator | TASK [Create venv directory] ***************************************************
2026-04-17 02:47:39.237364 | orchestrator | changed: [testbed-manager]
2026-04-17 02:47:39.237403 | orchestrator |
2026-04-17 02:47:39.237411 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-04-17 02:47:45.429317 | orchestrator | changed: [testbed-manager]
2026-04-17 02:47:45.429357 | orchestrator |
2026-04-17 02:47:45.429362 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-04-17 02:47:51.294539 | orchestrator | changed: [testbed-manager]
2026-04-17 02:47:51.294616 | orchestrator |
2026-04-17 02:47:51.294627 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-04-17 02:47:54.027958 | orchestrator | changed: [testbed-manager]
2026-04-17 02:47:54.028005 | orchestrator |
2026-04-17 02:47:54.028014 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-04-17 02:47:55.805038 | orchestrator | changed: [testbed-manager]
2026-04-17 02:47:55.805115 | orchestrator |
2026-04-17 02:47:55.805123 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-04-17 02:47:56.848885 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-04-17 02:47:56.848955 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-04-17 02:47:56.848963 | orchestrator |
2026-04-17 02:47:56.848975 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2026-04-17 02:47:56.890321 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-04-17 02:47:56.890384 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-04-17 02:47:56.890391 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2026-04-17 02:47:56.890398 | orchestrator | deprecation_warnings=False in ansible.cfg.
2026-04-17 02:48:00.049129 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-04-17 02:48:00.049354 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-04-17 02:48:00.049369 | orchestrator |
2026-04-17 02:48:00.049377 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2026-04-17 02:48:00.559290 | orchestrator | changed: [testbed-manager]
2026-04-17 02:48:00.559366 | orchestrator |
2026-04-17 02:48:00.559375 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2026-04-17 02:48:20.842596 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2026-04-17 02:48:20.842685 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2026-04-17 02:48:20.842698 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2026-04-17 02:48:20.842703 | orchestrator |
2026-04-17 02:48:20.842708 | orchestrator | TASK [Install local collections] ***********************************************
2026-04-17 02:48:23.079792 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2026-04-17 02:48:23.079834 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2026-04-17 02:48:23.079839 | orchestrator |
2026-04-17 02:48:23.079846 | orchestrator | PLAY [Create operator user] ****************************************************
2026-04-17 02:48:23.079851 | orchestrator |
2026-04-17 02:48:23.079856 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 02:48:24.449518 | orchestrator | ok: [testbed-manager]
2026-04-17 02:48:24.449556 | orchestrator |
2026-04-17 02:48:24.449562 | orchestrator | TASK [osism.commons.operator : Gather variables
for each operating system] ***** 2026-04-17 02:48:24.497729 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:24.497773 | orchestrator | 2026-04-17 02:48:24.497782 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-17 02:48:24.577819 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:24.577872 | orchestrator | 2026-04-17 02:48:24.577885 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-17 02:48:25.340559 | orchestrator | changed: [testbed-manager] 2026-04-17 02:48:25.340624 | orchestrator | 2026-04-17 02:48:25.340637 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-17 02:48:26.007426 | orchestrator | changed: [testbed-manager] 2026-04-17 02:48:26.007477 | orchestrator | 2026-04-17 02:48:26.007486 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-17 02:48:27.285289 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-17 02:48:27.285362 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-17 02:48:27.285370 | orchestrator | 2026-04-17 02:48:27.285377 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-17 02:48:28.676369 | orchestrator | changed: [testbed-manager] 2026-04-17 02:48:28.676456 | orchestrator | 2026-04-17 02:48:28.676466 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-17 02:48:30.356950 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-17 02:48:30.357021 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-17 02:48:30.357048 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-17 02:48:30.357054 | orchestrator | 2026-04-17 02:48:30.357061 | orchestrator | TASK [osism.commons.operator : 
Set custom environment variables in .bashrc configuration file] *** 2026-04-17 02:48:30.409912 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:48:30.409980 | orchestrator | 2026-04-17 02:48:30.409988 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-17 02:48:30.482940 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:48:30.482992 | orchestrator | 2026-04-17 02:48:30.483001 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-17 02:48:30.997675 | orchestrator | changed: [testbed-manager] 2026-04-17 02:48:30.997776 | orchestrator | 2026-04-17 02:48:30.997790 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-17 02:48:31.057429 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:48:31.057507 | orchestrator | 2026-04-17 02:48:31.057517 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-17 02:48:31.928252 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-17 02:48:31.929089 | orchestrator | changed: [testbed-manager] 2026-04-17 02:48:31.929120 | orchestrator | 2026-04-17 02:48:31.929129 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-17 02:48:31.957961 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:48:31.958003 | orchestrator | 2026-04-17 02:48:31.958010 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-17 02:48:31.989421 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:48:31.989478 | orchestrator | 2026-04-17 02:48:31.989484 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-17 02:48:32.019813 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:48:32.019875 | orchestrator | 2026-04-17 02:48:32.019881 | 
orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-17 02:48:32.096148 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:48:32.096237 | orchestrator | 2026-04-17 02:48:32.096246 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-17 02:48:32.796451 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:32.796538 | orchestrator | 2026-04-17 02:48:32.796551 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-17 02:48:32.796580 | orchestrator | 2026-04-17 02:48:32.796614 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-17 02:48:34.144733 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:34.144799 | orchestrator | 2026-04-17 02:48:34.144805 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-17 02:48:35.050446 | orchestrator | changed: [testbed-manager] 2026-04-17 02:48:35.050498 | orchestrator | 2026-04-17 02:48:35.050505 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 02:48:35.050511 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-17 02:48:35.050516 | orchestrator | 2026-04-17 02:48:35.256132 | orchestrator | ok: Runtime: 0:09:42.014993 2026-04-17 02:48:35.268136 | 2026-04-17 02:48:35.268259 | TASK [Point out that the log in on the manager is now possible] 2026-04-17 02:48:35.311871 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-17 02:48:35.320493 | 2026-04-17 02:48:35.320634 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-17 02:48:35.353690 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. 
There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-17 02:48:35.361494 | 2026-04-17 02:48:35.361607 | TASK [Run manager part 1 + 2] 2026-04-17 02:48:36.281081 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-17 02:48:36.356136 | orchestrator | 2026-04-17 02:48:36.356207 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-17 02:48:36.356215 | orchestrator | 2026-04-17 02:48:36.356229 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-17 02:48:38.809658 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:38.809719 | orchestrator | 2026-04-17 02:48:38.809747 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-17 02:48:38.838370 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:48:38.838430 | orchestrator | 2026-04-17 02:48:38.838444 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-17 02:48:38.872557 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:38.872606 | orchestrator | 2026-04-17 02:48:38.872614 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-17 02:48:38.913278 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:38.913334 | orchestrator | 2026-04-17 02:48:38.913346 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-17 02:48:38.984164 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:38.984243 | orchestrator | 2026-04-17 02:48:38.984254 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-17 02:48:39.054863 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:39.054910 | orchestrator | 2026-04-17 02:48:39.054917 | orchestrator | TASK
[osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-17 02:48:39.096716 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-17 02:48:39.096767 | orchestrator | 2026-04-17 02:48:39.096773 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-17 02:48:39.811483 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:39.811531 | orchestrator | 2026-04-17 02:48:39.811539 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-17 02:48:39.855092 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:48:39.855151 | orchestrator | 2026-04-17 02:48:39.855159 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-17 02:48:41.278574 | orchestrator | changed: [testbed-manager] 2026-04-17 02:48:41.278663 | orchestrator | 2026-04-17 02:48:41.278676 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-17 02:48:41.824301 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:41.824450 | orchestrator | 2026-04-17 02:48:41.824479 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-17 02:48:43.029351 | orchestrator | changed: [testbed-manager] 2026-04-17 02:48:43.029448 | orchestrator | 2026-04-17 02:48:43.029464 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-17 02:48:58.502300 | orchestrator | changed: [testbed-manager] 2026-04-17 02:48:58.502415 | orchestrator | 2026-04-17 02:48:58.502450 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-17 02:48:59.200395 | orchestrator | ok: [testbed-manager] 2026-04-17 02:48:59.200431 | orchestrator | 2026-04-17 
02:48:59.200438 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-17 02:48:59.259486 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:48:59.259542 | orchestrator | 2026-04-17 02:48:59.259556 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-17 02:49:00.135813 | orchestrator | changed: [testbed-manager] 2026-04-17 02:49:00.135886 | orchestrator | 2026-04-17 02:49:00.135896 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-17 02:49:00.987330 | orchestrator | changed: [testbed-manager] 2026-04-17 02:49:00.988430 | orchestrator | 2026-04-17 02:49:00.988511 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-17 02:49:01.502180 | orchestrator | changed: [testbed-manager] 2026-04-17 02:49:01.502262 | orchestrator | 2026-04-17 02:49:01.502270 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-17 02:49:01.542692 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-17 02:49:01.542826 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-17 02:49:01.542844 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-17 02:49:01.542855 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-17 02:49:03.992685 | orchestrator | changed: [testbed-manager] 2026-04-17 02:49:03.992762 | orchestrator | 2026-04-17 02:49:03.992776 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-17 02:49:12.055862 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-17 02:49:12.055956 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-17 02:49:12.055969 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-17 02:49:12.055977 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-17 02:49:12.055991 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-17 02:49:12.055998 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-17 02:49:12.056005 | orchestrator | 2026-04-17 02:49:12.056013 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-17 02:49:13.030831 | orchestrator | changed: [testbed-manager] 2026-04-17 02:49:13.030901 | orchestrator | 2026-04-17 02:49:13.030911 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-17 02:49:15.982836 | orchestrator | changed: [testbed-manager] 2026-04-17 02:49:15.982912 | orchestrator | 2026-04-17 02:49:15.982924 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-17 02:49:16.026626 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:49:16.026705 | orchestrator | 2026-04-17 02:49:16.026715 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-17 02:50:52.630286 | orchestrator | changed: [testbed-manager] 2026-04-17 02:50:52.630388 | orchestrator | 2026-04-17 02:50:52.630405 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-17 02:50:53.786900 | orchestrator | ok: [testbed-manager] 2026-04-17 02:50:53.786951 | 
orchestrator | 2026-04-17 02:50:53.786961 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 02:50:53.786970 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-17 02:50:53.786976 | orchestrator | 2026-04-17 02:50:53.995116 | orchestrator | ok: Runtime: 0:02:18.227828 2026-04-17 02:50:54.007856 | 2026-04-17 02:50:54.007992 | TASK [Reboot manager] 2026-04-17 02:50:55.544229 | orchestrator | ok: Runtime: 0:00:00.931235 2026-04-17 02:50:55.561258 | 2026-04-17 02:50:55.561412 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-17 02:51:10.304209 | orchestrator | ok 2026-04-17 02:51:10.314006 | 2026-04-17 02:51:10.314124 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-17 02:52:10.363996 | orchestrator | ok 2026-04-17 02:52:10.373249 | 2026-04-17 02:52:10.373368 | TASK [Deploy manager + bootstrap nodes] 2026-04-17 02:52:12.711133 | orchestrator | 2026-04-17 02:52:12.711414 | orchestrator | # DEPLOY MANAGER 2026-04-17 02:52:12.711452 | orchestrator | 2026-04-17 02:52:12.711473 | orchestrator | + set -e 2026-04-17 02:52:12.711493 | orchestrator | + echo 2026-04-17 02:52:12.711514 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-17 02:52:12.711541 | orchestrator | + echo 2026-04-17 02:52:12.711607 | orchestrator | + cat /opt/manager-vars.sh 2026-04-17 02:52:12.714448 | orchestrator | export NUMBER_OF_NODES=6 2026-04-17 02:52:12.714516 | orchestrator | 2026-04-17 02:52:12.714526 | orchestrator | export CEPH_VERSION=reef 2026-04-17 02:52:12.714535 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-17 02:52:12.714543 | orchestrator | export MANAGER_VERSION=9.5.0 2026-04-17 02:52:12.714561 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-17 02:52:12.714567 | orchestrator | 2026-04-17 02:52:12.714577 | orchestrator | export ARA=false 2026-04-17 02:52:12.714583 | orchestrator 
| export DEPLOY_MODE=manager 2026-04-17 02:52:12.714592 | orchestrator | export TEMPEST=false 2026-04-17 02:52:12.714599 | orchestrator | export IS_ZUUL=true 2026-04-17 02:52:12.714605 | orchestrator | 2026-04-17 02:52:12.714615 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 02:52:12.714621 | orchestrator | export EXTERNAL_API=false 2026-04-17 02:52:12.714626 | orchestrator | 2026-04-17 02:52:12.714632 | orchestrator | export IMAGE_USER=ubuntu 2026-04-17 02:52:12.714640 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-17 02:52:12.714645 | orchestrator | 2026-04-17 02:52:12.714651 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-17 02:52:12.714665 | orchestrator | 2026-04-17 02:52:12.714670 | orchestrator | + echo 2026-04-17 02:52:12.714677 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 02:52:12.715606 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 02:52:12.715677 | orchestrator | ++ INTERACTIVE=false 2026-04-17 02:52:12.715693 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 02:52:12.715705 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 02:52:12.715713 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 02:52:12.715721 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 02:52:12.715737 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 02:52:12.715745 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 02:52:12.715752 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 02:52:12.715768 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 02:52:12.715777 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 02:52:12.715785 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-17 02:52:12.715793 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-17 02:52:12.715801 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 02:52:12.715822 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 02:52:12.715831 | orchestrator | ++ export ARA=false 
2026-04-17 02:52:12.715839 | orchestrator | ++ ARA=false 2026-04-17 02:52:12.715847 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 02:52:12.715854 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 02:52:12.715863 | orchestrator | ++ export TEMPEST=false 2026-04-17 02:52:12.715871 | orchestrator | ++ TEMPEST=false 2026-04-17 02:52:12.715879 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 02:52:12.715887 | orchestrator | ++ IS_ZUUL=true 2026-04-17 02:52:12.715894 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 02:52:12.715902 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 02:52:12.715910 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 02:52:12.715918 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 02:52:12.715926 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 02:52:12.715934 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 02:52:12.715943 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 02:52:12.715951 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 02:52:12.715959 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 02:52:12.715968 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 02:52:12.715981 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-17 02:52:12.768722 | orchestrator | + docker version 2026-04-17 02:52:12.886842 | orchestrator | Client: Docker Engine - Community 2026-04-17 02:52:12.886945 | orchestrator | Version: 27.5.1 2026-04-17 02:52:12.886964 | orchestrator | API version: 1.47 2026-04-17 02:52:12.886969 | orchestrator | Go version: go1.22.11 2026-04-17 02:52:12.886973 | orchestrator | Git commit: 9f9e405 2026-04-17 02:52:12.886982 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-17 02:52:12.886991 | orchestrator | OS/Arch: linux/amd64 2026-04-17 02:52:12.886997 | orchestrator | Context: default 2026-04-17 02:52:12.887005 | orchestrator | 2026-04-17 02:52:12.887014 | 
orchestrator | Server: Docker Engine - Community 2026-04-17 02:52:12.887031 | orchestrator | Engine: 2026-04-17 02:52:12.887040 | orchestrator | Version: 27.5.1 2026-04-17 02:52:12.887046 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-17 02:52:12.887079 | orchestrator | Go version: go1.22.11 2026-04-17 02:52:12.887087 | orchestrator | Git commit: 4c9b3b0 2026-04-17 02:52:12.887095 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-17 02:52:12.887101 | orchestrator | OS/Arch: linux/amd64 2026-04-17 02:52:12.887253 | orchestrator | Experimental: false 2026-04-17 02:52:12.887266 | orchestrator | containerd: 2026-04-17 02:52:12.887462 | orchestrator | Version: v2.2.3 2026-04-17 02:52:12.887479 | orchestrator | GitCommit: 77c84241c7cbdd9b4eca2591793e3d4f4317c590 2026-04-17 02:52:12.887487 | orchestrator | runc: 2026-04-17 02:52:12.887497 | orchestrator | Version: 1.3.5 2026-04-17 02:52:12.887504 | orchestrator | GitCommit: v1.3.5-0-g488fc13e 2026-04-17 02:52:12.887511 | orchestrator | docker-init: 2026-04-17 02:52:12.887619 | orchestrator | Version: 0.19.0 2026-04-17 02:52:12.887626 | orchestrator | GitCommit: de40ad0 2026-04-17 02:52:12.891429 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-17 02:52:12.900937 | orchestrator | + set -e 2026-04-17 02:52:12.901011 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 02:52:12.901021 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 02:52:12.901028 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 02:52:12.901034 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 02:52:12.901040 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 02:52:12.901044 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 02:52:12.901049 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 02:52:12.901054 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-17 02:52:12.901060 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-17 02:52:12.901066 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2 2026-04-17 02:52:12.901073 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 02:52:12.901079 | orchestrator | ++ export ARA=false 2026-04-17 02:52:12.901086 | orchestrator | ++ ARA=false 2026-04-17 02:52:12.901092 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 02:52:12.901099 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 02:52:12.901106 | orchestrator | ++ export TEMPEST=false 2026-04-17 02:52:12.901113 | orchestrator | ++ TEMPEST=false 2026-04-17 02:52:12.901119 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 02:52:12.901128 | orchestrator | ++ IS_ZUUL=true 2026-04-17 02:52:12.901135 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 02:52:12.901141 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 02:52:12.901148 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 02:52:12.901154 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 02:52:12.901160 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 02:52:12.901166 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 02:52:12.901172 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 02:52:12.901178 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 02:52:12.901196 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 02:52:12.901200 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 02:52:12.901204 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 02:52:12.901214 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 02:52:12.901218 | orchestrator | ++ INTERACTIVE=false 2026-04-17 02:52:12.901222 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 02:52:12.901229 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 02:52:12.901233 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-17 02:52:12.901244 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-04-17 02:52:12.908675 | orchestrator | + set -e 2026-04-17 
02:52:12.908767 | orchestrator | + VERSION=9.5.0 2026-04-17 02:52:12.908782 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-04-17 02:52:12.914286 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-17 02:52:12.914353 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-17 02:52:12.916308 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-17 02:52:12.918911 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-04-17 02:52:12.926269 | orchestrator | /opt/configuration ~ 2026-04-17 02:52:12.926331 | orchestrator | + set -e 2026-04-17 02:52:12.926340 | orchestrator | + pushd /opt/configuration 2026-04-17 02:52:12.926348 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-17 02:52:12.927430 | orchestrator | + source /opt/venv/bin/activate 2026-04-17 02:52:12.928502 | orchestrator | ++ deactivate nondestructive 2026-04-17 02:52:12.928577 | orchestrator | ++ '[' -n '' ']' 2026-04-17 02:52:12.928598 | orchestrator | ++ '[' -n '' ']' 2026-04-17 02:52:12.928629 | orchestrator | ++ hash -r 2026-04-17 02:52:12.928637 | orchestrator | ++ '[' -n '' ']' 2026-04-17 02:52:12.928645 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-17 02:52:12.928652 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-17 02:52:12.928660 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']'
2026-04-17 02:52:12.928883 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-17 02:52:12.928901 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-17 02:52:12.928915 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-17 02:52:12.928923 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-17 02:52:12.928931 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 02:52:12.928939 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 02:52:12.928951 | orchestrator | ++ export PATH
2026-04-17 02:52:12.928960 | orchestrator | ++ '[' -n '' ']'
2026-04-17 02:52:12.928972 | orchestrator | ++ '[' -z '' ']'
2026-04-17 02:52:12.928982 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-17 02:52:12.929099 | orchestrator | ++ PS1='(venv) '
2026-04-17 02:52:12.929111 | orchestrator | ++ export PS1
2026-04-17 02:52:12.929146 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-17 02:52:12.929153 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-17 02:52:12.929164 | orchestrator | ++ hash -r
2026-04-17 02:52:12.929172 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-17 02:52:13.831246 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-17 02:52:13.831627 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-17 02:52:13.832756 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-17 02:52:13.833874 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-17 02:52:13.834861 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.1)
2026-04-17 02:52:13.844476 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-17 02:52:13.845663 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-17 02:52:13.846554 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-17 02:52:13.847620 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-17 02:52:13.870725 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-17 02:52:13.871751 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-17 02:52:13.873315 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-17 02:52:13.874432 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-17 02:52:13.877954 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-17 02:52:14.045141 | orchestrator | ++ which gilt
2026-04-17 02:52:14.048913 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-17 02:52:14.048996 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-17 02:52:14.240799 | orchestrator | osism.cfg-generics:
2026-04-17 02:52:14.367490 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-17 02:52:14.367600 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-17 02:52:14.367866 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-17 02:52:14.367942 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-17 02:52:15.167821 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-17 02:52:15.175950 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-17 02:52:15.526118 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-17 02:52:15.564275 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-17 02:52:15.564363 | orchestrator | + deactivate
2026-04-17 02:52:15.564374 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-17 02:52:15.564384 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 02:52:15.564392 | orchestrator | + export PATH
2026-04-17 02:52:15.564400 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-17 02:52:15.564409 | orchestrator | + '[' -n '' ']'
2026-04-17 02:52:15.564418 | orchestrator | + hash -r
2026-04-17 02:52:15.564425 | orchestrator | + '[' -n '' ']'
2026-04-17 02:52:15.564433 | orchestrator | + unset VIRTUAL_ENV
2026-04-17 02:52:15.564440 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-17 02:52:15.564448 | orchestrator | ~
2026-04-17 02:52:15.564455 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-17 02:52:15.564463 | orchestrator | + unset -f deactivate
2026-04-17 02:52:15.564471 | orchestrator | + popd
2026-04-17 02:52:15.565870 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-17 02:52:15.565934 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-17 02:52:15.566195 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-17 02:52:15.619842 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-17 02:52:15.619928 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-17 02:52:15.619939 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-17 02:52:15.621007 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-17 02:52:15.666919 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-17 02:52:15.667511 | orchestrator | ++ semver 2024.2 2025.1
2026-04-17 02:52:15.719427 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-17 02:52:15.719527 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-17 02:52:15.809377 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-17 02:52:15.809520 | orchestrator | + source /opt/venv/bin/activate
2026-04-17 02:52:15.809545 | orchestrator | ++ deactivate nondestructive
2026-04-17 02:52:15.809572 | orchestrator | ++ '[' -n '' ']'
2026-04-17 02:52:15.809591 | orchestrator | ++ '[' -n '' ']'
2026-04-17 02:52:15.809610 | orchestrator | ++ hash -r
2026-04-17 02:52:15.809634 | orchestrator | ++ '[' -n '' ']'
2026-04-17 02:52:15.809651 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-17 02:52:15.809668 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-17 02:52:15.809684 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-17 02:52:15.809719 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-17 02:52:15.809736 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-17 02:52:15.809768 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-17 02:52:15.809793 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-17 02:52:15.809810 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 02:52:15.809852 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 02:52:15.809871 | orchestrator | ++ export PATH
2026-04-17 02:52:15.809887 | orchestrator | ++ '[' -n '' ']'
2026-04-17 02:52:15.809903 | orchestrator | ++ '[' -z '' ']'
2026-04-17 02:52:15.809919 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-17 02:52:15.809937 | orchestrator | ++ PS1='(venv) '
2026-04-17 02:52:15.809959 | orchestrator | ++ export PS1
2026-04-17 02:52:15.809976 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-17 02:52:15.809992 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-17 02:52:15.810007 | orchestrator | ++ hash -r
2026-04-17 02:52:15.810109 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-17 02:52:16.700744 | orchestrator |
2026-04-17 02:52:16.700868 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-17 02:52:16.700886 | orchestrator |
2026-04-17 02:52:16.700898 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-17 02:52:17.218778 | orchestrator | ok: [testbed-manager]
2026-04-17 02:52:17.218893 | orchestrator |
2026-04-17 02:52:17.218912 | orchestrator | TASK [Copy fact files] *********************************************************
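The shell trace above gates configuration flags (e.g. `enable_osism_kubernetes: true`) on the output of a `semver` helper that returns 1, 0, or -1 depending on how two versions compare. That helper's source is not shown in the log; the sketch below is a hypothetical re-implementation of the same contract using GNU `sort -V`, and may differ from the real script (for instance in pre-release handling such as `10.0.0-0`).

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the semver helper used in the trace above:
# prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2, via GNU sort -V.
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        # $2 sorts first, so $1 is the larger version
        echo 1
    else
        echo -1
    fi
}

# Gate a feature flag on a minimum version, as the trace does:
if [ "$(semver 9.5.0 7.0.0)" -ge 0 ]; then
    echo 'enable_osism_kubernetes: true'
fi
```

This mirrors the log: `semver 9.5.0 7.0.0` yields 1 (flag enabled), while `semver 9.5.0 10.0.0-0` and `semver 2024.2 2025.1` yield -1 (the corresponding branches are skipped).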
2026-04-17 02:52:18.106258 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:18.106437 | orchestrator |
2026-04-17 02:52:18.106456 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-17 02:52:18.106468 | orchestrator |
2026-04-17 02:52:18.106482 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 02:52:20.401714 | orchestrator | ok: [testbed-manager]
2026-04-17 02:52:20.401811 | orchestrator |
2026-04-17 02:52:20.401822 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-17 02:52:20.454589 | orchestrator | ok: [testbed-manager]
2026-04-17 02:52:20.454660 | orchestrator |
2026-04-17 02:52:20.454666 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-17 02:52:20.924969 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:20.925051 | orchestrator |
2026-04-17 02:52:20.925065 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-17 02:52:20.968555 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:52:20.968634 | orchestrator |
2026-04-17 02:52:20.968644 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-17 02:52:21.299843 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:21.299926 | orchestrator |
2026-04-17 02:52:21.299936 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-17 02:52:21.627976 | orchestrator | ok: [testbed-manager]
2026-04-17 02:52:21.628058 | orchestrator |
2026-04-17 02:52:21.628071 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-17 02:52:21.749408 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:52:21.749515 | orchestrator |
2026-04-17 02:52:21.749532 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-17 02:52:21.749545 | orchestrator |
2026-04-17 02:52:21.749556 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 02:52:23.484546 | orchestrator | ok: [testbed-manager]
2026-04-17 02:52:23.484640 | orchestrator |
2026-04-17 02:52:23.484652 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-17 02:52:23.578557 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-17 02:52:23.578650 | orchestrator |
2026-04-17 02:52:23.578664 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-17 02:52:23.642380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-17 02:52:23.642504 | orchestrator |
2026-04-17 02:52:23.642529 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-17 02:52:24.748989 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-17 02:52:24.749104 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-17 02:52:24.749124 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-17 02:52:24.749166 | orchestrator |
2026-04-17 02:52:24.749214 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-17 02:52:26.566357 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-17 02:52:26.566444 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-17 02:52:26.566453 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-17 02:52:26.566461 | orchestrator |
2026-04-17 02:52:26.566469 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-17 02:52:27.212500 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-17 02:52:27.212633 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:27.212661 | orchestrator |
2026-04-17 02:52:27.212679 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-17 02:52:27.840011 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-17 02:52:28.070591 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:28.070658 | orchestrator |
2026-04-17 02:52:28.070672 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-17 02:52:28.070702 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:52:28.070713 | orchestrator |
2026-04-17 02:52:28.070722 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-17 02:52:28.249108 | orchestrator | ok: [testbed-manager]
2026-04-17 02:52:28.249271 | orchestrator |
2026-04-17 02:52:28.249286 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-17 02:52:28.319141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-17 02:52:28.319236 | orchestrator |
2026-04-17 02:52:28.319245 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-17 02:52:29.391641 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:29.391744 | orchestrator |
2026-04-17 02:52:29.391761 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-17 02:52:30.185133 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:30.185270 | orchestrator |
2026-04-17 02:52:30.185283 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-17 02:52:41.170881 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:41.171015 | orchestrator |
2026-04-17 02:52:41.171043 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-17 02:52:41.257671 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:52:41.257787 | orchestrator |
2026-04-17 02:52:41.257865 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-17 02:52:41.257894 | orchestrator |
2026-04-17 02:52:41.257911 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 02:52:43.150556 | orchestrator | ok: [testbed-manager]
2026-04-17 02:52:43.150637 | orchestrator |
2026-04-17 02:52:43.150646 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-17 02:52:43.261077 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-17 02:52:43.261162 | orchestrator |
2026-04-17 02:52:43.261292 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-17 02:52:43.331158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-17 02:52:43.331306 | orchestrator |
2026-04-17 02:52:43.331322 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-17 02:52:45.734237 | orchestrator | ok: [testbed-manager]
2026-04-17 02:52:45.734343 | orchestrator |
2026-04-17 02:52:45.734358 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-17 02:52:45.771029 | orchestrator | ok: [testbed-manager]
2026-04-17 02:52:45.771113 | orchestrator |
2026-04-17 02:52:45.771125 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-17 02:52:45.897113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-17 02:52:45.897234 | orchestrator |
2026-04-17 02:52:45.897245 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-17 02:52:48.697086 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-17 02:52:48.697286 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-17 02:52:48.697302 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-17 02:52:48.697311 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-17 02:52:48.697319 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-17 02:52:48.697327 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-17 02:52:48.697334 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-17 02:52:48.697342 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-17 02:52:48.697349 | orchestrator |
2026-04-17 02:52:48.697358 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-17 02:52:49.319241 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:49.319319 | orchestrator |
2026-04-17 02:52:49.319331 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-17 02:52:49.940677 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:49.940752 | orchestrator |
2026-04-17 02:52:49.940760 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-17 02:52:50.022974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-17 02:52:50.023072 | orchestrator |
2026-04-17 02:52:50.023114 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-17 02:52:51.235037 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-17 02:52:51.235142 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-17 02:52:51.235151 | orchestrator |
2026-04-17 02:52:51.235159 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-17 02:52:51.843536 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:51.843631 | orchestrator |
2026-04-17 02:52:51.843646 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-17 02:52:51.891210 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:52:51.891289 | orchestrator |
2026-04-17 02:52:51.891299 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-17 02:52:51.965147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-17 02:52:51.965311 | orchestrator |
2026-04-17 02:52:51.965326 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-17 02:52:52.573815 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:52.573904 | orchestrator |
2026-04-17 02:52:52.573915 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-17 02:52:52.644847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-17 02:52:52.644937 | orchestrator |
2026-04-17 02:52:52.644950 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-17 02:52:53.995431 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-17 02:52:53.995542 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-17 02:52:53.995557 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:53.995570 | orchestrator |
2026-04-17 02:52:53.995594 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-17 02:52:54.603330 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:54.603440 | orchestrator |
2026-04-17 02:52:54.603465 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-17 02:52:54.649832 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:52:54.649934 | orchestrator |
2026-04-17 02:52:54.649947 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-17 02:52:54.731500 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-17 02:52:54.731592 | orchestrator |
2026-04-17 02:52:54.731608 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-17 02:52:55.243731 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:55.243821 | orchestrator |
2026-04-17 02:52:55.243829 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-17 02:52:55.644398 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:55.644507 | orchestrator |
2026-04-17 02:52:55.644527 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-17 02:52:56.871015 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-17 02:52:56.871095 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-17 02:52:56.871102 | orchestrator |
2026-04-17 02:52:56.871108 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-17 02:52:57.510257 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:57.510350 | orchestrator |
2026-04-17 02:52:57.510361 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-17 02:52:57.882515 | orchestrator | ok: [testbed-manager]
2026-04-17 02:52:57.882601 | orchestrator |
2026-04-17 02:52:57.882611 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-17 02:52:58.239161 | orchestrator | changed: [testbed-manager]
2026-04-17 02:52:58.239287 | orchestrator |
2026-04-17 02:52:58.239298 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-17 02:52:58.288620 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:52:58.288688 | orchestrator |
2026-04-17 02:52:58.288695 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-17 02:52:58.356313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-17 02:52:58.356421 | orchestrator |
2026-04-17 02:52:58.356442 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-17 02:52:58.398698 | orchestrator | ok: [testbed-manager]
2026-04-17 02:52:58.398779 | orchestrator |
2026-04-17 02:52:58.398787 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-17 02:53:00.429806 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-17 02:53:00.429914 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-17 02:53:00.429930 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-17 02:53:00.429940 | orchestrator |
2026-04-17 02:53:00.429952 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-17 02:53:01.146368 | orchestrator | changed: [testbed-manager]
2026-04-17 02:53:01.146466 | orchestrator |
2026-04-17 02:53:01.146488 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-17 02:53:01.858481 | orchestrator | changed: [testbed-manager]
2026-04-17 02:53:01.858585 | orchestrator |
2026-04-17 02:53:01.858603 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-17 02:53:02.536127 | orchestrator | changed: [testbed-manager]
2026-04-17 02:53:02.536272 | orchestrator |
2026-04-17 02:53:02.536286 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-17 02:53:02.602403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-17 02:53:02.602519 | orchestrator |
2026-04-17 02:53:02.602541 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-17 02:53:02.653559 | orchestrator | ok: [testbed-manager]
2026-04-17 02:53:02.653650 | orchestrator |
2026-04-17 02:53:02.653664 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-17 02:53:03.345620 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-17 02:53:03.345704 | orchestrator |
2026-04-17 02:53:03.345716 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-17 02:53:03.431534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-17 02:53:03.431664 | orchestrator |
2026-04-17 02:53:03.431687 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-17 02:53:04.147750 | orchestrator | changed: [testbed-manager]
2026-04-17 02:53:04.147839 | orchestrator |
2026-04-17 02:53:04.147851 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-17 02:53:04.775494 | orchestrator | ok: [testbed-manager]
2026-04-17 02:53:04.775634 | orchestrator |
2026-04-17 02:53:04.775656 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-17 02:53:04.828433 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:53:04.828507 | orchestrator |
2026-04-17 02:53:04.828514 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-17 02:53:04.888092 | orchestrator | ok: [testbed-manager]
2026-04-17 02:53:04.888159 | orchestrator |
2026-04-17 02:53:04.888188 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-17 02:53:05.696226 | orchestrator | changed: [testbed-manager]
2026-04-17 02:53:05.696311 | orchestrator |
2026-04-17 02:53:05.696321 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-17 02:54:12.175959 | orchestrator | changed: [testbed-manager]
2026-04-17 02:54:12.176046 | orchestrator |
2026-04-17 02:54:12.176055 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-17 02:54:13.155829 | orchestrator | ok: [testbed-manager]
2026-04-17 02:54:13.156086 | orchestrator |
2026-04-17 02:54:13.156139 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-17 02:54:13.214287 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:54:13.214381 | orchestrator |
2026-04-17 02:54:13.214397 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-17 02:54:15.590451 | orchestrator | changed: [testbed-manager]
2026-04-17 02:54:15.590597 | orchestrator |
2026-04-17 02:54:15.590618 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
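The two mutually exclusive tasks above ("Set mariadb healthcheck for mariadb < 11.0.0" skipped, ">= 11.0.0" taken, with `mariadb:11.8.4` deployed) show the role selecting a docker-compose healthcheck based on the MariaDB image version. The sketch below reproduces only that branching; the two healthcheck command strings are illustrative assumptions, not the role's actual values.

```shell
#!/usr/bin/env bash
# Sketch of the version-dependent healthcheck selection seen in the log.
# Only the branching on the MariaDB version is taken from the log; the
# healthcheck command strings are assumptions for illustration.
MARIADB_VERSION=11.8.4

version_ge() {  # true if $1 >= $2 in version ordering (GNU sort -V)
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

if version_ge "$MARIADB_VERSION" 11.0.0; then
    # Newer images ship a healthcheck.sh entry point (assumed command)
    healthcheck="healthcheck.sh --connect --innodb_initialized"
else
    # Older images are typically probed with mysqladmin (assumed command)
    healthcheck="mysqladmin ping --silent"
fi
echo "Selected healthcheck: $healthcheck"
```

With `MARIADB_VERSION=11.8.4` the first branch is taken, matching the log's skipped/ok task pair.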
2026-04-17 02:54:15.641215 | orchestrator | ok: [testbed-manager]
2026-04-17 02:54:15.641305 | orchestrator |
2026-04-17 02:54:15.641317 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-17 02:54:15.641327 | orchestrator |
2026-04-17 02:54:15.641335 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-17 02:54:15.789610 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:54:15.789706 | orchestrator |
2026-04-17 02:54:15.789717 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-17 02:55:15.847850 | orchestrator | Pausing for 60 seconds
2026-04-17 02:55:15.847962 | orchestrator | changed: [testbed-manager]
2026-04-17 02:55:15.847976 | orchestrator |
2026-04-17 02:55:15.847988 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-17 02:55:18.416988 | orchestrator | changed: [testbed-manager]
2026-04-17 02:55:18.417071 | orchestrator |
2026-04-17 02:55:18.417083 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-17 02:55:59.992639 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-17 02:55:59.992743 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
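The "Wait for an healthy manager service" handler above is a retry-until-healthy loop: Ansible re-runs a health command with a retry budget (50 here) and a delay, logging "FAILED - RETRYING" until the check succeeds. A generic shell sketch of that pattern, under assumptions (the docker command in the usage comment and the container name are illustrative, not the role's actual check):

```shell
#!/usr/bin/env bash
# Generic sketch of a retry-until-healthy loop: run a check command until it
# succeeds or the retry budget is exhausted, pausing between attempts.
wait_for_healthy() {
    local check_cmd=$1 retries=${2:-50} delay=${3:-5}
    local i
    for ((i = retries; i > 0; i--)); do
        if eval "$check_cmd"; then
            return 0   # healthy
        fi
        echo "FAILED - RETRYING: ($i retries left)." >&2
        sleep "$delay"
    done
    return 1           # retry budget exhausted
}

# Illustrative usage (container name assumed, not taken from the log):
# wait_for_healthy "docker inspect --format '{{.State.Health.Status}}' manager | grep -q healthy"
```

In the log the check succeeds after two failed attempts, so the handler ends with `changed` rather than exhausting its 50 retries.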
2026-04-17 02:55:59.992756 | orchestrator | changed: [testbed-manager]
2026-04-17 02:55:59.992767 | orchestrator |
2026-04-17 02:55:59.992797 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-17 02:56:09.952450 | orchestrator | changed: [testbed-manager]
2026-04-17 02:56:09.952556 | orchestrator |
2026-04-17 02:56:09.952578 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-17 02:56:10.035295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-17 02:56:10.035399 | orchestrator |
2026-04-17 02:56:10.035415 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-17 02:56:10.035427 | orchestrator |
2026-04-17 02:56:10.035437 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-17 02:56:10.092370 | orchestrator | skipping: [testbed-manager]
2026-04-17 02:56:10.092459 | orchestrator |
2026-04-17 02:56:10.092475 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-17 02:56:10.156681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-17 02:56:10.156766 | orchestrator |
2026-04-17 02:56:10.156778 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-17 02:56:10.896389 | orchestrator | changed: [testbed-manager]
2026-04-17 02:56:10.896489 | orchestrator |
2026-04-17 02:56:10.896504 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-17 02:56:14.192031 | orchestrator | ok: [testbed-manager]
2026-04-17 02:56:14.192106 | orchestrator |
2026-04-17 02:56:14.192114 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-17 02:56:14.248673 | orchestrator | ok: [testbed-manager] => {
2026-04-17 02:56:14.248759 | orchestrator | "version_check_result.stdout_lines": [
2026-04-17 02:56:14.248779 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-17 02:56:14.248799 | orchestrator | "Checking running containers against expected versions...",
2026-04-17 02:56:14.248813 | orchestrator | "",
2026-04-17 02:56:14.248826 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-17 02:56:14.248838 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-17 02:56:14.248851 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.248864 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-17 02:56:14.248876 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.248889 | orchestrator | "",
2026-04-17 02:56:14.248902 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-17 02:56:14.248916 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-17 02:56:14.248954 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.248963 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-17 02:56:14.248971 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.248978 | orchestrator | "",
2026-04-17 02:56:14.248985 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-17 02:56:14.248993 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-17 02:56:14.249000 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249007 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-17 02:56:14.249015 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249022 | orchestrator | "",
2026-04-17 02:56:14.249029 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-17 02:56:14.249036 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-17 02:56:14.249044 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249051 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-17 02:56:14.249058 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249066 | orchestrator | "",
2026-04-17 02:56:14.249074 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-17 02:56:14.249085 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-17 02:56:14.249094 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249103 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-17 02:56:14.249111 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249120 | orchestrator | "",
2026-04-17 02:56:14.249128 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-17 02:56:14.249137 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249146 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249154 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249163 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249172 | orchestrator | "",
2026-04-17 02:56:14.249181 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-17 02:56:14.249253 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-17 02:56:14.249264 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249274 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-17 02:56:14.249285 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249295 | orchestrator | "",
2026-04-17 02:56:14.249305 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-17 02:56:14.249315 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-17 02:56:14.249325 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249336 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-17 02:56:14.249346 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249356 | orchestrator | "",
2026-04-17 02:56:14.249366 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-17 02:56:14.249376 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-17 02:56:14.249387 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249397 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-17 02:56:14.249406 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249416 | orchestrator | "",
2026-04-17 02:56:14.249426 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-17 02:56:14.249436 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-17 02:56:14.249446 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249456 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-17 02:56:14.249467 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249477 | orchestrator | "",
2026-04-17 02:56:14.249486 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-17 02:56:14.249496 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249514 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249524 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249534 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249544 | orchestrator | "",
2026-04-17 02:56:14.249554 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-17 02:56:14.249564 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249574 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249584 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249592 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249602 | orchestrator | "",
2026-04-17 02:56:14.249611 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-04-17 02:56:14.249620 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249629 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249638 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249647 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249656 | orchestrator | "",
2026-04-17 02:56:14.249665 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-04-17 02:56:14.249674 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249684 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249693 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249720 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249729 | orchestrator | "",
2026-04-17 02:56:14.249738 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-04-17 02:56:14.249747 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249755 | orchestrator | " Enabled: true",
2026-04-17 02:56:14.249773 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-17 02:56:14.249782 | orchestrator | " Status: ✅ MATCH",
2026-04-17 02:56:14.249791 | orchestrator | "",
2026-04-17 02:56:14.249800 | orchestrator | "=== Summary ===",
2026-04-17 02:56:14.249808 | orchestrator | "Errors (version mismatches): 0",
2026-04-17 02:56:14.249817 | orchestrator | "Warnings (expected containers not
running): 0", 2026-04-17 02:56:14.249826 | orchestrator | "", 2026-04-17 02:56:14.249834 | orchestrator | "✅ All running containers match expected versions!" 2026-04-17 02:56:14.249843 | orchestrator | ] 2026-04-17 02:56:14.249852 | orchestrator | } 2026-04-17 02:56:14.249861 | orchestrator | 2026-04-17 02:56:14.249870 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-17 02:56:14.298701 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:56:14.298800 | orchestrator | 2026-04-17 02:56:14.298816 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 02:56:14.298829 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-17 02:56:14.298841 | orchestrator | 2026-04-17 02:56:14.392070 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-17 02:56:14.392159 | orchestrator | + deactivate 2026-04-17 02:56:14.392171 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-17 02:56:14.392235 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-17 02:56:14.392246 | orchestrator | + export PATH 2026-04-17 02:56:14.392255 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-17 02:56:14.392265 | orchestrator | + '[' -n '' ']' 2026-04-17 02:56:14.392274 | orchestrator | + hash -r 2026-04-17 02:56:14.392283 | orchestrator | + '[' -n '' ']' 2026-04-17 02:56:14.392292 | orchestrator | + unset VIRTUAL_ENV 2026-04-17 02:56:14.392301 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-17 02:56:14.392310 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-17 02:56:14.392319 | orchestrator | + unset -f deactivate 2026-04-17 02:56:14.392329 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-17 02:56:14.400849 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-17 02:56:14.400948 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-17 02:56:14.400963 | orchestrator | + local max_attempts=60 2026-04-17 02:56:14.401009 | orchestrator | + local name=ceph-ansible 2026-04-17 02:56:14.401022 | orchestrator | + local attempt_num=1 2026-04-17 02:56:14.401927 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 02:56:14.427157 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 02:56:14.427317 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-17 02:56:14.427344 | orchestrator | + local max_attempts=60 2026-04-17 02:56:14.427366 | orchestrator | + local name=kolla-ansible 2026-04-17 02:56:14.427386 | orchestrator | + local attempt_num=1 2026-04-17 02:56:14.427518 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-17 02:56:14.455534 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 02:56:14.455708 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-17 02:56:14.455737 | orchestrator | + local max_attempts=60 2026-04-17 02:56:14.455754 | orchestrator | + local name=osism-ansible 2026-04-17 02:56:14.455774 | orchestrator | + local attempt_num=1 2026-04-17 02:56:14.455903 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-17 02:56:14.489523 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 02:56:14.489616 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-17 02:56:14.489643 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-17 02:56:15.140702 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-17 02:56:15.325983 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-17 02:56:15.326137 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-04-17 02:56:15.326155 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-04-17 02:56:15.326165 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-04-17 02:56:15.326176 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-04-17 02:56:15.326302 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2026-04-17 02:56:15.326315 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2026-04-17 02:56:15.326323 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 57 seconds (healthy) 2026-04-17 02:56:15.326332 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2026-04-17 02:56:15.326341 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2026-04-17 02:56:15.326350 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 
openstack 2 minutes ago Up About a minute (healthy) 2026-04-17 02:56:15.326358 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2026-04-17 02:56:15.326367 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-17 02:56:15.326395 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-17 02:56:15.326405 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-17 02:56:15.326414 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2026-04-17 02:56:15.332588 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-17 02:56:15.386449 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-17 02:56:15.386553 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-17 02:56:15.391115 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-17 02:56:27.580920 | orchestrator | 2026-04-17 02:56:27 | INFO  | Task 9636824e-f824-4d5c-aa36-14789b2b3ce3 (resolvconf) was prepared for execution. 2026-04-17 02:56:27.581016 | orchestrator | 2026-04-17 02:56:27 | INFO  | It takes a moment until task 9636824e-f824-4d5c-aa36-14789b2b3ce3 (resolvconf) has been started and output is visible here. 
2026-04-17 02:56:42.184162 | orchestrator | 2026-04-17 02:56:42.184289 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-17 02:56:42.184305 | orchestrator | 2026-04-17 02:56:42.184312 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-17 02:56:42.184317 | orchestrator | Friday 17 April 2026 02:56:31 +0000 (0:00:00.137) 0:00:00.137 ********** 2026-04-17 02:56:42.184323 | orchestrator | ok: [testbed-manager] 2026-04-17 02:56:42.184329 | orchestrator | 2026-04-17 02:56:42.184335 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-17 02:56:42.184341 | orchestrator | Friday 17 April 2026 02:56:36 +0000 (0:00:04.706) 0:00:04.843 ********** 2026-04-17 02:56:42.184346 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:56:42.184353 | orchestrator | 2026-04-17 02:56:42.184358 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-17 02:56:42.184363 | orchestrator | Friday 17 April 2026 02:56:36 +0000 (0:00:00.065) 0:00:04.909 ********** 2026-04-17 02:56:42.184369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-17 02:56:42.184375 | orchestrator | 2026-04-17 02:56:42.184380 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-17 02:56:42.184385 | orchestrator | Friday 17 April 2026 02:56:36 +0000 (0:00:00.084) 0:00:04.994 ********** 2026-04-17 02:56:42.184405 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-17 02:56:42.184411 | orchestrator | 2026-04-17 02:56:42.184416 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-17 02:56:42.184421 | orchestrator | Friday 17 April 2026 02:56:36 +0000 (0:00:00.081) 0:00:05.076 ********** 2026-04-17 02:56:42.184426 | orchestrator | ok: [testbed-manager] 2026-04-17 02:56:42.184431 | orchestrator | 2026-04-17 02:56:42.184436 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-17 02:56:42.184441 | orchestrator | Friday 17 April 2026 02:56:37 +0000 (0:00:01.054) 0:00:06.130 ********** 2026-04-17 02:56:42.184446 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:56:42.184451 | orchestrator | 2026-04-17 02:56:42.184457 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-17 02:56:42.184462 | orchestrator | Friday 17 April 2026 02:56:37 +0000 (0:00:00.056) 0:00:06.187 ********** 2026-04-17 02:56:42.184483 | orchestrator | ok: [testbed-manager] 2026-04-17 02:56:42.184489 | orchestrator | 2026-04-17 02:56:42.184494 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-17 02:56:42.184499 | orchestrator | Friday 17 April 2026 02:56:38 +0000 (0:00:00.492) 0:00:06.679 ********** 2026-04-17 02:56:42.184504 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:56:42.184509 | orchestrator | 2026-04-17 02:56:42.184514 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-17 02:56:42.184522 | orchestrator | Friday 17 April 2026 02:56:38 +0000 (0:00:00.081) 0:00:06.760 ********** 2026-04-17 02:56:42.184530 | orchestrator | changed: [testbed-manager] 2026-04-17 02:56:42.184540 | orchestrator | 2026-04-17 02:56:42.184551 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-17 02:56:42.184560 | orchestrator | Friday 17 April 2026 02:56:38 +0000 (0:00:00.523) 0:00:07.284 ********** 2026-04-17 02:56:42.184567 | orchestrator | changed: 
[testbed-manager] 2026-04-17 02:56:42.184575 | orchestrator | 2026-04-17 02:56:42.184582 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-17 02:56:42.184591 | orchestrator | Friday 17 April 2026 02:56:39 +0000 (0:00:01.025) 0:00:08.310 ********** 2026-04-17 02:56:42.184599 | orchestrator | ok: [testbed-manager] 2026-04-17 02:56:42.184608 | orchestrator | 2026-04-17 02:56:42.184613 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-17 02:56:42.184618 | orchestrator | Friday 17 April 2026 02:56:40 +0000 (0:00:00.984) 0:00:09.295 ********** 2026-04-17 02:56:42.184623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-17 02:56:42.184628 | orchestrator | 2026-04-17 02:56:42.184633 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-17 02:56:42.184638 | orchestrator | Friday 17 April 2026 02:56:40 +0000 (0:00:00.071) 0:00:09.367 ********** 2026-04-17 02:56:42.184643 | orchestrator | changed: [testbed-manager] 2026-04-17 02:56:42.184648 | orchestrator | 2026-04-17 02:56:42.184653 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 02:56:42.184659 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 02:56:42.184664 | orchestrator | 2026-04-17 02:56:42.184669 | orchestrator | 2026-04-17 02:56:42.184674 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 02:56:42.184678 | orchestrator | Friday 17 April 2026 02:56:41 +0000 (0:00:01.093) 0:00:10.460 ********** 2026-04-17 02:56:42.184683 | orchestrator | =============================================================================== 2026-04-17 02:56:42.184688 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.71s 2026-04-17 02:56:42.184694 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.09s 2026-04-17 02:56:42.184700 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.05s 2026-04-17 02:56:42.184706 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.03s 2026-04-17 02:56:42.184712 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2026-04-17 02:56:42.184718 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2026-04-17 02:56:42.184737 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2026-04-17 02:56:42.184743 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-17 02:56:42.184749 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-04-17 02:56:42.184755 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-17 02:56:42.184761 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-04-17 02:56:42.184766 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-04-17 02:56:42.184779 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-04-17 02:56:42.449047 | orchestrator | + osism apply sshconfig 2026-04-17 02:56:54.419919 | orchestrator | 2026-04-17 02:56:54 | INFO  | Task 88b495a2-d9f5-41e8-93b1-189ace0d9030 (sshconfig) was prepared for execution. 
2026-04-17 02:56:54.420005 | orchestrator | 2026-04-17 02:56:54 | INFO  | It takes a moment until task 88b495a2-d9f5-41e8-93b1-189ace0d9030 (sshconfig) has been started and output is visible here. 2026-04-17 02:57:05.420917 | orchestrator | 2026-04-17 02:57:05.421007 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-17 02:57:05.421016 | orchestrator | 2026-04-17 02:57:05.421038 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-17 02:57:05.421044 | orchestrator | Friday 17 April 2026 02:56:58 +0000 (0:00:00.120) 0:00:00.120 ********** 2026-04-17 02:57:05.421050 | orchestrator | ok: [testbed-manager] 2026-04-17 02:57:05.421056 | orchestrator | 2026-04-17 02:57:05.421061 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-17 02:57:05.421067 | orchestrator | Friday 17 April 2026 02:56:58 +0000 (0:00:00.482) 0:00:00.602 ********** 2026-04-17 02:57:05.421079 | orchestrator | changed: [testbed-manager] 2026-04-17 02:57:05.421086 | orchestrator | 2026-04-17 02:57:05.421091 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-17 02:57:05.421096 | orchestrator | Friday 17 April 2026 02:56:59 +0000 (0:00:00.462) 0:00:01.065 ********** 2026-04-17 02:57:05.421102 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-17 02:57:05.421107 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-17 02:57:05.421113 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-17 02:57:05.421118 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-17 02:57:05.421123 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-17 02:57:05.421128 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-17 02:57:05.421133 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-17 02:57:05.421138 | orchestrator | 2026-04-17 02:57:05.421143 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-17 02:57:05.421149 | orchestrator | Friday 17 April 2026 02:57:04 +0000 (0:00:05.176) 0:00:06.241 ********** 2026-04-17 02:57:05.421154 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:57:05.421159 | orchestrator | 2026-04-17 02:57:05.421164 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-17 02:57:05.421169 | orchestrator | Friday 17 April 2026 02:57:04 +0000 (0:00:00.085) 0:00:06.327 ********** 2026-04-17 02:57:05.421174 | orchestrator | changed: [testbed-manager] 2026-04-17 02:57:05.421179 | orchestrator | 2026-04-17 02:57:05.421238 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 02:57:05.421249 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 02:57:05.421259 | orchestrator | 2026-04-17 02:57:05.421267 | orchestrator | 2026-04-17 02:57:05.421275 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 02:57:05.421284 | orchestrator | Friday 17 April 2026 02:57:05 +0000 (0:00:00.575) 0:00:06.902 ********** 2026-04-17 02:57:05.421291 | orchestrator | =============================================================================== 2026-04-17 02:57:05.421296 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.18s 2026-04-17 02:57:05.421301 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2026-04-17 02:57:05.421306 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.48s 2026-04-17 02:57:05.421311 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.46s 2026-04-17 02:57:05.421334 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-04-17 02:57:05.686469 | orchestrator | + osism apply known-hosts 2026-04-17 02:57:17.716996 | orchestrator | 2026-04-17 02:57:17 | INFO  | Task ae937d4c-7605-4799-817d-d44324014c33 (known-hosts) was prepared for execution. 2026-04-17 02:57:17.717085 | orchestrator | 2026-04-17 02:57:17 | INFO  | It takes a moment until task ae937d4c-7605-4799-817d-d44324014c33 (known-hosts) has been started and output is visible here. 2026-04-17 02:57:33.955218 | orchestrator | 2026-04-17 02:57:33.955344 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-17 02:57:33.955371 | orchestrator | 2026-04-17 02:57:33.955389 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-17 02:57:33.955407 | orchestrator | Friday 17 April 2026 02:57:21 +0000 (0:00:00.141) 0:00:00.141 ********** 2026-04-17 02:57:33.955424 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-17 02:57:33.955442 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-17 02:57:33.955458 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-17 02:57:33.955475 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-17 02:57:33.955491 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-17 02:57:33.955507 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-17 02:57:33.955524 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-17 02:57:33.955541 | orchestrator | 2026-04-17 02:57:33.955558 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-17 02:57:33.955576 | orchestrator | Friday 17 April 2026 02:57:27 +0000 (0:00:05.822) 0:00:05.963 ********** 2026-04-17 
02:57:33.955593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-17 02:57:33.955611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-17 02:57:33.955632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-17 02:57:33.955649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-17 02:57:33.955663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-17 02:57:33.955688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-17 02:57:33.955699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-17 02:57:33.955710 | orchestrator | 2026-04-17 02:57:33.955721 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:33.955732 | orchestrator | Friday 17 April 2026 02:57:27 +0000 (0:00:00.176) 0:00:06.140 ********** 2026-04-17 02:57:33.955743 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIHv/SX5Cj97d+2zIOmcTM5fBIt9rUYYeawmOnB3KLjwa) 2026-04-17 02:57:33.955763 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOxEmwKoAvNuT0Uj/lbTl8nRtfIDBpK0C6qY/swBFQhYmTgU5kenTpIEeh1kTO89RWxsx9pQJdgFgidOJkWScMczCAFMmZcaN2Qol1OlWclwYMpL7m1z7lftyrGsQB0b9stMwf20MZGDLa26x70S1YakE1J9huKgrRJ134Hv2qhLdnGpBMVBmridJn33E6djYjB8MFJB/BhhASHZ000o84ZorPP+vCexmuJOxyy5264sIIQ3wExBB1Qy93YVVHp/N9oNGNmEqxCivH3KoIiYDEYsieLZ/C2LEwEZ89DaD1izUbacbBiSGBBWegfCoydUYfC/AMWbOdlhPBRL4T0dMj5f/mnZc3cGksrGtg+MhcsqrtvuDgT5E57cRRBcwzc5wMJ/m817bj+BEuyZ+JKUz5kvjRok28WIiQweomMd4ctWVrE22pAj8XUWzRG29UNsPjZDLj3ieSY0O8vxvuawhEPOrIiEOU6xptf2TKHS3b+GVjOGaH9kVSXmQYm8Km4YM=) 2026-04-17 02:57:33.955799 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDPXlrIzHjhaBKuGsufKA8174wnGma8MxHxMNnspGSCBZB+8qNv1vVzczJ9XCYhRJqa5yzirCFtDHVWFirByd60=) 2026-04-17 02:57:33.955810 | orchestrator | 2026-04-17 02:57:33.955820 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:33.955828 | orchestrator | Friday 17 April 2026 02:57:28 +0000 (0:00:01.160) 0:00:07.300 ********** 2026-04-17 02:57:33.955854 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoOCSnEkZpnvPAwabY1vgJDtTNt7k2n+RG2dYDeLjf5qsAZXc9paarxLizah1jB9fPiHeqtubeIhujK78dM7qaIldPt6CM3EUKFvFhnuZLgcOamgeM1Z8QS7NM1d19vaRLr1rfExTMLdmgOH5J49GsMntyH7rKEA2ZeIeD0c6HEaydFmSUV7PSDU7qh5KMhy6gbtdH/sRqZ7/wmdHIG8HQGXRvK6yEMXPHKD6TR7lTXKRbYc8WZyhQxEpkCWzvUf+ajfveUnodPCTpPC0Srd6qL78d3OoXuXdAb7jn4VNEP7Y0QX/cksS0Ki98qWN1Vt0o5XaMEyAbEIS4zV9ezCK0kY4yiDAYAmgqXi0acnuoMtxX447Oa989yKqZjFAvnKRDfEJWztw02h05w4Ke5nIjsSPAH/RnbD3or7IKWw4M6lGcjNanfOMPtKfhDwkJz0Rs4VsK1b7V7Iz2EZx2gTZwzc5k+hhWdhjbgoMxqozGADv1dGnEXVHX3WOKIxgjaC8=) 2026-04-17 02:57:33.955865 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOb8WjMwUOK3l2ZFFcOMILtkP9VSCzP6cDZK4duQqeWw/cZG6m0tT5CIb3kXRtpaT0jem5zp6z0wXCJgWQ1lXe0=) 2026-04-17 02:57:33.955874 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINWzbf3mLefnh2Vh/Z/x6J5CGzeWVMpsmu6DkKmS6gL3) 2026-04-17 02:57:33.955883 | orchestrator | 2026-04-17 02:57:33.955893 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:33.955902 | orchestrator | Friday 17 April 2026 02:57:29 +0000 (0:00:01.027) 0:00:08.327 ********** 2026-04-17 02:57:33.955912 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCps32TgDCZ7fT6jX83CBYC/aT1wZ/jb+on/KnJdQmWA1/bhaMARVpbpgXjLusZl098xRw56PNLG29QiAS8KZaNrSawBxgI52rXmEO35Tx3ayx+hFrBN78hTO0BndRHn7/TFyvpAol20EXeUqE5z9ss7x8I5JATqMalRbfn1mOg0Ltm1fXvnMR/P3wjCqFbyo//aX/8CzQMi0TIExzw2HD6c0zqE6RqHGGKIdUxsyY+ilhLgtw+0/10s+7kGMwRuqVjA2ZQRc6FXEP07Y+YWY0o7hky0W6k0DaN4Dl0U5ixu1vxNCb2sWacRFACPo3Ryy5lcEaMQiJkgZKnOzrrU8bNNRMswOavoptLQidUq+RDgM4E1ryfb0zMx3u/D2zAZ3Ra2FwVJ0+IPQSUIP1vUIXDHqn/bR5XHXw5OvSPvm+c2Zxjwff+/TOVngQz0+SMSWSseRZsiCH/dKTs/abbceU9M/GIcvhd4uiCIA/RvEKvDNmKjj12TwxeoqmbPTsSNcM=) 2026-04-17 02:57:33.955921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDiuilS7KI8EDy908hBxAWcmEJkmPHP9WkHqMu8FvsZj2XnVkYlUTzYwKwJod2j+MfxJGRyUnwLONdxLxr/5eus=) 2026-04-17 02:57:33.955931 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ3UPXT4wPNo2snzsYRs2IkDRJT/45W5pV3XTBxaiuQ3) 2026-04-17 02:57:33.955940 | orchestrator | 2026-04-17 02:57:33.955949 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:33.955958 | orchestrator | Friday 17 April 2026 02:57:30 +0000 (0:00:01.058) 
0:00:09.386 ********** 2026-04-17 02:57:33.955969 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFt9TrJCfo6fsek76DnSzXK0VkAD83hn4vGA04JZAzMGkfBJvAnlrbndYSQ+7JUSGTM6PWlo2mBEnYbqCqVAvp4=) 2026-04-17 02:57:33.955985 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfE8rOeGCBM2IWv0C47SY0BsFN/WoG6c7yoPw/r/C0A) 2026-04-17 02:57:33.955998 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5ISeSyEo15YtTiaOP4/i/dWnFiTCrc80sAZWDl5nK/XfE6EU4vv4irpXorrqnpRt6OLbyD/n6SYyhP3fQXpCa8nzTLTGW44FlykEDRw59+cHIEaylhARZu/hQL+3KzCjCk9UI+WRfoTiAwhQu5/YzHFGtJg4sJqWkpOQ1+xf3Zk03BVylIbwbGQKfBRBIkJlpg6gKlqenJuNp3wyaw1rBAY+tdtCux1ffmXkeXXfry/QU4g0+Ro3zzfs1oUA0YfgYZ09bhG/0cQDa56zbuRkKEViNP519kRxtvOS6Y46d45SBl9elApqDrnS4hyGz4dU4/6RSsbHCYe2L5553YsL93mqJFtpSst8lnw251VbmLSC26Hln4o/f/0tP7XvT9Z/kbPUGMk0+NVio/u3+jma6MUtofcQXzjh9uKwzardr6dvYH9KHsF2YNpihwqIUsora9jB9evZEPbjKJHPxo3K+zZks0DCh0DHpkjld8v27Vq/CE4ZL5Tc1vcTZasx6kFE=) 2026-04-17 02:57:33.956023 | orchestrator | 2026-04-17 02:57:33.956038 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:33.956052 | orchestrator | Friday 17 April 2026 02:57:31 +0000 (0:00:01.032) 0:00:10.418 ********** 2026-04-17 02:57:33.956123 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAsR597uJG8gaCI8IzRPVRCPX1FxDDBJ9FlWAKjPfl0sUzL2lXde0szoPlhL8LwBzaJnm81xFyorJ64p2XjRDNc=) 2026-04-17 02:57:33.956133 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDVIF3TrCEhxHQxPuxMo3yzOuNX0SWDPIw/nUg45fFB9Txk2LH1JSDlw2lEFPy9RaOWIxi58K5fsofPsJGN1DCSZ/lmnKZP5UFO6iOq+uX51iInmNSHU1Fq6P3kJV9REm/GKncgG5RwmUgiJF18lrOwKdXpmIiRdnOR0AFOAbIdecm5/U2tu2FreCqNLPaHlAOynVkg3JTe6xH4xQRq+HMfSBW65rnKJhul5+xeqgdY2dxEXA3OieRSSW8lv8i7ZOeXgp7hxouYFMbfy6lfsdMlLEjWa5br3gQbTR5H101FZ6L49oUBMrxajIjJlg9NcL5jaFS5+VEBJfvGTWFt33A6HUfq997sRjF6mtoDRHiWnZy0J7hbpDRLCdb+yAAgJLoUQ4TViwh6JQdUt39pYCUXWGOnEnTlyYfIi8rxX/YsQjltnE6Z5dqtn6Nu1OC2MdoaKRE24tujES1DcSlhQkcHAaEy6yhS5idJ8F+MA+23l0Xsj91YdzzEZZ3YIfbHtSs=) 2026-04-17 02:57:33.956141 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGdEt40KfdDSPqH354krwbFM58VEb3wU/+zYXgEQKrcL) 2026-04-17 02:57:33.956149 | orchestrator | 2026-04-17 02:57:33.956157 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:33.956164 | orchestrator | Friday 17 April 2026 02:57:32 +0000 (0:00:01.064) 0:00:11.483 ********** 2026-04-17 02:57:33.956181 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6FmxwD2Y82W7T7lneinf3IqqhsBb6t656gr4oUlTAIYQU6bPpogLef+sqtWdpnqsFEK9ORNGyEMPd/uZSKRpfGTzvkycLIv/ZSG4huTHPUZ9dMhiXCFNRRKPUSHamr9BVcbLHSpBBJmP04F1Mgh4v1bg02gb7VXHLncnlIkMBHb/gAqfXh1JR/4WEgylYHpFZVKqaBqWg1q/vyX+6XaovjRXtccUeKtgXor8ptYk0dh9DYk4VdhGD6IQfjzCfyNroKFXhE4UktXBbvoOMyTq6zdMWE83h1atEUxpyxBZCxA+Uq/xQQEtaNh+BnFrD0R9XGL2gsH9b0C0hsGFlTOr12r9mYvAWHPFzVvE2X7n5mQpc4PhBDYhD/hEBpneeBRnqMUE1Bi/uD6i6wJBGKjGazTweO2CtEoqf26JEBSmCS7UjnO89mluYh200R4HXLqeXnh53j0WMOreUWMyPqR+r3ayAnmM1QFbrF2c/urhBAS5rVGLJrk5cSscrYedm/Ok=) 2026-04-17 02:57:44.491057 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN03K7gX1yyWOzAAxcnn5nTXnIxfOOKR47P+ZyLr9rVyA4wcg0VY3HDgW/p9KkMIJrNHI6T2INqU+on3uUiNLX0=) 2026-04-17 02:57:44.491173 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILcP4bP1xDuiHP7vs5P8DGZtiy+LRQsexwSlt7pUKsKM) 2026-04-17 02:57:44.491274 | orchestrator | 2026-04-17 02:57:44.491294 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:44.491310 | orchestrator | Friday 17 April 2026 02:57:33 +0000 (0:00:01.056) 0:00:12.540 ********** 2026-04-17 02:57:44.491326 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH1gj2gdUdNMiXxEi3ccvf/13WO+BAd9tK2WSlxARsAOqzfH+OxnEKbgKgXeqjM5RqqjAhrsGDoLNlFzPKwy95o=) 2026-04-17 02:57:44.491344 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgRn2m4OjQRKGOvSBkn46Hm6InkYQyhVS5ieKXJW/gzVQPSIyBBVh632Lm5wZHAdAG3Vw/hJbQrbsKFnLcchVtXfhGUQma7lW/W9mkBrKLdpE/RvVneJpIugxhxCn2U/uhqPTE2a6ULEkw1B406Ngq/SpHX19JoKj+oKBchOygCrsBBI5SuPU1ZL6CO1gpfri2ECfonQer5cKI6hjq6ow7bsm8CAkYPGySYMFFv+BGOLHuiTIkytcuTf84dxEhu1K+m/OhvB2zIBFuRvXHFsYM2Webo2ZIHOMWinT+1xm1Yea/OxH7N8Rzj0gWB0Nten0nHxyAwff+ZBSG9oXSjCvsNKUmQrXOxnz4RYoNjdojN0g38GswPD5fN50gHo6xRpX6wxr8VVX8GbXlD68RYl+Pc5a2O+yZbuL5oDxTMbReJ+1v9atyZn0USxijeVPEcERcrI4ERWGNCNqr46kiMfjHLOtgtvqhy6eWXhOWIuJ/RMOWVny3+fq4c4MhOAX96DM=) 2026-04-17 02:57:44.491417 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDhceEQg3xlXj6Xyf0UT3STx1WrdkZNJft7SP9kQ+ZTD) 2026-04-17 02:57:44.491434 | orchestrator | 2026-04-17 02:57:44.491449 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-17 02:57:44.491465 | orchestrator | Friday 17 April 2026 02:57:34 +0000 (0:00:01.034) 0:00:13.574 ********** 2026-04-17 02:57:44.491480 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-17 02:57:44.491495 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-17 02:57:44.491509 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-4) 2026-04-17 02:57:44.491524 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-17 02:57:44.491537 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-17 02:57:44.491549 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-17 02:57:44.491560 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-17 02:57:44.491570 | orchestrator | 2026-04-17 02:57:44.491581 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-17 02:57:44.491594 | orchestrator | Friday 17 April 2026 02:57:40 +0000 (0:00:05.155) 0:00:18.729 ********** 2026-04-17 02:57:44.491606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-17 02:57:44.491621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-17 02:57:44.491631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-17 02:57:44.491642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-17 02:57:44.491654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-17 02:57:44.491664 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-17 02:57:44.491675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-17 02:57:44.491687 | orchestrator | 2026-04-17 02:57:44.491697 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:44.491708 | orchestrator | Friday 17 April 2026 02:57:40 +0000 (0:00:00.186) 0:00:18.916 ********** 2026-04-17 02:57:44.491720 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHv/SX5Cj97d+2zIOmcTM5fBIt9rUYYeawmOnB3KLjwa) 2026-04-17 02:57:44.491773 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOxEmwKoAvNuT0Uj/lbTl8nRtfIDBpK0C6qY/swBFQhYmTgU5kenTpIEeh1kTO89RWxsx9pQJdgFgidOJkWScMczCAFMmZcaN2Qol1OlWclwYMpL7m1z7lftyrGsQB0b9stMwf20MZGDLa26x70S1YakE1J9huKgrRJ134Hv2qhLdnGpBMVBmridJn33E6djYjB8MFJB/BhhASHZ000o84ZorPP+vCexmuJOxyy5264sIIQ3wExBB1Qy93YVVHp/N9oNGNmEqxCivH3KoIiYDEYsieLZ/C2LEwEZ89DaD1izUbacbBiSGBBWegfCoydUYfC/AMWbOdlhPBRL4T0dMj5f/mnZc3cGksrGtg+MhcsqrtvuDgT5E57cRRBcwzc5wMJ/m817bj+BEuyZ+JKUz5kvjRok28WIiQweomMd4ctWVrE22pAj8XUWzRG29UNsPjZDLj3ieSY0O8vxvuawhEPOrIiEOU6xptf2TKHS3b+GVjOGaH9kVSXmQYm8Km4YM=) 2026-04-17 02:57:44.491787 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDPXlrIzHjhaBKuGsufKA8174wnGma8MxHxMNnspGSCBZB+8qNv1vVzczJ9XCYhRJqa5yzirCFtDHVWFirByd60=) 2026-04-17 02:57:44.491810 | orchestrator | 2026-04-17 02:57:44.491823 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:44.491835 | orchestrator | Friday 17 April 2026 
02:57:41 +0000 (0:00:01.013) 0:00:19.930 ********** 2026-04-17 02:57:44.491848 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoOCSnEkZpnvPAwabY1vgJDtTNt7k2n+RG2dYDeLjf5qsAZXc9paarxLizah1jB9fPiHeqtubeIhujK78dM7qaIldPt6CM3EUKFvFhnuZLgcOamgeM1Z8QS7NM1d19vaRLr1rfExTMLdmgOH5J49GsMntyH7rKEA2ZeIeD0c6HEaydFmSUV7PSDU7qh5KMhy6gbtdH/sRqZ7/wmdHIG8HQGXRvK6yEMXPHKD6TR7lTXKRbYc8WZyhQxEpkCWzvUf+ajfveUnodPCTpPC0Srd6qL78d3OoXuXdAb7jn4VNEP7Y0QX/cksS0Ki98qWN1Vt0o5XaMEyAbEIS4zV9ezCK0kY4yiDAYAmgqXi0acnuoMtxX447Oa989yKqZjFAvnKRDfEJWztw02h05w4Ke5nIjsSPAH/RnbD3or7IKWw4M6lGcjNanfOMPtKfhDwkJz0Rs4VsK1b7V7Iz2EZx2gTZwzc5k+hhWdhjbgoMxqozGADv1dGnEXVHX3WOKIxgjaC8=) 2026-04-17 02:57:44.491860 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOb8WjMwUOK3l2ZFFcOMILtkP9VSCzP6cDZK4duQqeWw/cZG6m0tT5CIb3kXRtpaT0jem5zp6z0wXCJgWQ1lXe0=) 2026-04-17 02:57:44.491874 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINWzbf3mLefnh2Vh/Z/x6J5CGzeWVMpsmu6DkKmS6gL3) 2026-04-17 02:57:44.491885 | orchestrator | 2026-04-17 02:57:44.491897 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:44.491909 | orchestrator | Friday 17 April 2026 02:57:42 +0000 (0:00:01.058) 0:00:20.988 ********** 2026-04-17 02:57:44.491921 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCps32TgDCZ7fT6jX83CBYC/aT1wZ/jb+on/KnJdQmWA1/bhaMARVpbpgXjLusZl098xRw56PNLG29QiAS8KZaNrSawBxgI52rXmEO35Tx3ayx+hFrBN78hTO0BndRHn7/TFyvpAol20EXeUqE5z9ss7x8I5JATqMalRbfn1mOg0Ltm1fXvnMR/P3wjCqFbyo//aX/8CzQMi0TIExzw2HD6c0zqE6RqHGGKIdUxsyY+ilhLgtw+0/10s+7kGMwRuqVjA2ZQRc6FXEP07Y+YWY0o7hky0W6k0DaN4Dl0U5ixu1vxNCb2sWacRFACPo3Ryy5lcEaMQiJkgZKnOzrrU8bNNRMswOavoptLQidUq+RDgM4E1ryfb0zMx3u/D2zAZ3Ra2FwVJ0+IPQSUIP1vUIXDHqn/bR5XHXw5OvSPvm+c2Zxjwff+/TOVngQz0+SMSWSseRZsiCH/dKTs/abbceU9M/GIcvhd4uiCIA/RvEKvDNmKjj12TwxeoqmbPTsSNcM=) 2026-04-17 02:57:44.491933 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDiuilS7KI8EDy908hBxAWcmEJkmPHP9WkHqMu8FvsZj2XnVkYlUTzYwKwJod2j+MfxJGRyUnwLONdxLxr/5eus=) 2026-04-17 02:57:44.491945 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ3UPXT4wPNo2snzsYRs2IkDRJT/45W5pV3XTBxaiuQ3) 2026-04-17 02:57:44.491958 | orchestrator | 2026-04-17 02:57:44.491969 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:44.491981 | orchestrator | Friday 17 April 2026 02:57:43 +0000 (0:00:01.049) 0:00:22.038 ********** 2026-04-17 02:57:44.491993 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5ISeSyEo15YtTiaOP4/i/dWnFiTCrc80sAZWDl5nK/XfE6EU4vv4irpXorrqnpRt6OLbyD/n6SYyhP3fQXpCa8nzTLTGW44FlykEDRw59+cHIEaylhARZu/hQL+3KzCjCk9UI+WRfoTiAwhQu5/YzHFGtJg4sJqWkpOQ1+xf3Zk03BVylIbwbGQKfBRBIkJlpg6gKlqenJuNp3wyaw1rBAY+tdtCux1ffmXkeXXfry/QU4g0+Ro3zzfs1oUA0YfgYZ09bhG/0cQDa56zbuRkKEViNP519kRxtvOS6Y46d45SBl9elApqDrnS4hyGz4dU4/6RSsbHCYe2L5553YsL93mqJFtpSst8lnw251VbmLSC26Hln4o/f/0tP7XvT9Z/kbPUGMk0+NVio/u3+jma6MUtofcQXzjh9uKwzardr6dvYH9KHsF2YNpihwqIUsora9jB9evZEPbjKJHPxo3K+zZks0DCh0DHpkjld8v27Vq/CE4ZL5Tc1vcTZasx6kFE=) 2026-04-17 02:57:44.492006 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFt9TrJCfo6fsek76DnSzXK0VkAD83hn4vGA04JZAzMGkfBJvAnlrbndYSQ+7JUSGTM6PWlo2mBEnYbqCqVAvp4=) 2026-04-17 02:57:44.492029 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfE8rOeGCBM2IWv0C47SY0BsFN/WoG6c7yoPw/r/C0A) 2026-04-17 02:57:48.808395 | orchestrator | 2026-04-17 02:57:48.808504 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:48.808512 | orchestrator | Friday 17 April 2026 02:57:44 +0000 (0:00:01.033) 0:00:23.072 ********** 2026-04-17 02:57:48.808520 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVIF3TrCEhxHQxPuxMo3yzOuNX0SWDPIw/nUg45fFB9Txk2LH1JSDlw2lEFPy9RaOWIxi58K5fsofPsJGN1DCSZ/lmnKZP5UFO6iOq+uX51iInmNSHU1Fq6P3kJV9REm/GKncgG5RwmUgiJF18lrOwKdXpmIiRdnOR0AFOAbIdecm5/U2tu2FreCqNLPaHlAOynVkg3JTe6xH4xQRq+HMfSBW65rnKJhul5+xeqgdY2dxEXA3OieRSSW8lv8i7ZOeXgp7hxouYFMbfy6lfsdMlLEjWa5br3gQbTR5H101FZ6L49oUBMrxajIjJlg9NcL5jaFS5+VEBJfvGTWFt33A6HUfq997sRjF6mtoDRHiWnZy0J7hbpDRLCdb+yAAgJLoUQ4TViwh6JQdUt39pYCUXWGOnEnTlyYfIi8rxX/YsQjltnE6Z5dqtn6Nu1OC2MdoaKRE24tujES1DcSlhQkcHAaEy6yhS5idJ8F+MA+23l0Xsj91YdzzEZZ3YIfbHtSs=) 2026-04-17 02:57:48.808528 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAsR597uJG8gaCI8IzRPVRCPX1FxDDBJ9FlWAKjPfl0sUzL2lXde0szoPlhL8LwBzaJnm81xFyorJ64p2XjRDNc=) 2026-04-17 02:57:48.808533 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGdEt40KfdDSPqH354krwbFM58VEb3wU/+zYXgEQKrcL) 2026-04-17 02:57:48.808539 | orchestrator | 2026-04-17 02:57:48.808543 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:48.808546 | orchestrator | Friday 17 April 2026 02:57:45 +0000 (0:00:01.030) 0:00:24.103 
********** 2026-04-17 02:57:48.808550 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILcP4bP1xDuiHP7vs5P8DGZtiy+LRQsexwSlt7pUKsKM) 2026-04-17 02:57:48.808554 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6FmxwD2Y82W7T7lneinf3IqqhsBb6t656gr4oUlTAIYQU6bPpogLef+sqtWdpnqsFEK9ORNGyEMPd/uZSKRpfGTzvkycLIv/ZSG4huTHPUZ9dMhiXCFNRRKPUSHamr9BVcbLHSpBBJmP04F1Mgh4v1bg02gb7VXHLncnlIkMBHb/gAqfXh1JR/4WEgylYHpFZVKqaBqWg1q/vyX+6XaovjRXtccUeKtgXor8ptYk0dh9DYk4VdhGD6IQfjzCfyNroKFXhE4UktXBbvoOMyTq6zdMWE83h1atEUxpyxBZCxA+Uq/xQQEtaNh+BnFrD0R9XGL2gsH9b0C0hsGFlTOr12r9mYvAWHPFzVvE2X7n5mQpc4PhBDYhD/hEBpneeBRnqMUE1Bi/uD6i6wJBGKjGazTweO2CtEoqf26JEBSmCS7UjnO89mluYh200R4HXLqeXnh53j0WMOreUWMyPqR+r3ayAnmM1QFbrF2c/urhBAS5rVGLJrk5cSscrYedm/Ok=) 2026-04-17 02:57:48.808558 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN03K7gX1yyWOzAAxcnn5nTXnIxfOOKR47P+ZyLr9rVyA4wcg0VY3HDgW/p9KkMIJrNHI6T2INqU+on3uUiNLX0=) 2026-04-17 02:57:48.808562 | orchestrator | 2026-04-17 02:57:48.808566 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 02:57:48.808570 | orchestrator | Friday 17 April 2026 02:57:46 +0000 (0:00:01.033) 0:00:25.137 ********** 2026-04-17 02:57:48.808588 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCgRn2m4OjQRKGOvSBkn46Hm6InkYQyhVS5ieKXJW/gzVQPSIyBBVh632Lm5wZHAdAG3Vw/hJbQrbsKFnLcchVtXfhGUQma7lW/W9mkBrKLdpE/RvVneJpIugxhxCn2U/uhqPTE2a6ULEkw1B406Ngq/SpHX19JoKj+oKBchOygCrsBBI5SuPU1ZL6CO1gpfri2ECfonQer5cKI6hjq6ow7bsm8CAkYPGySYMFFv+BGOLHuiTIkytcuTf84dxEhu1K+m/OhvB2zIBFuRvXHFsYM2Webo2ZIHOMWinT+1xm1Yea/OxH7N8Rzj0gWB0Nten0nHxyAwff+ZBSG9oXSjCvsNKUmQrXOxnz4RYoNjdojN0g38GswPD5fN50gHo6xRpX6wxr8VVX8GbXlD68RYl+Pc5a2O+yZbuL5oDxTMbReJ+1v9atyZn0USxijeVPEcERcrI4ERWGNCNqr46kiMfjHLOtgtvqhy6eWXhOWIuJ/RMOWVny3+fq4c4MhOAX96DM=) 2026-04-17 02:57:48.808592 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH1gj2gdUdNMiXxEi3ccvf/13WO+BAd9tK2WSlxARsAOqzfH+OxnEKbgKgXeqjM5RqqjAhrsGDoLNlFzPKwy95o=) 2026-04-17 02:57:48.808597 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDhceEQg3xlXj6Xyf0UT3STx1WrdkZNJft7SP9kQ+ZTD) 2026-04-17 02:57:48.808600 | orchestrator | 2026-04-17 02:57:48.808604 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-17 02:57:48.808621 | orchestrator | Friday 17 April 2026 02:57:47 +0000 (0:00:01.034) 0:00:26.171 ********** 2026-04-17 02:57:48.808626 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-17 02:57:48.808630 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-17 02:57:48.808634 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-17 02:57:48.808638 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-17 02:57:48.808642 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-17 02:57:48.808646 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-17 02:57:48.808649 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-17 02:57:48.808653 | orchestrator | 
skipping: [testbed-manager] 2026-04-17 02:57:48.808657 | orchestrator | 2026-04-17 02:57:48.808672 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-17 02:57:48.808676 | orchestrator | Friday 17 April 2026 02:57:47 +0000 (0:00:00.174) 0:00:26.346 ********** 2026-04-17 02:57:48.808680 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:57:48.808684 | orchestrator | 2026-04-17 02:57:48.808688 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-17 02:57:48.808695 | orchestrator | Friday 17 April 2026 02:57:47 +0000 (0:00:00.054) 0:00:26.400 ********** 2026-04-17 02:57:48.808699 | orchestrator | skipping: [testbed-manager] 2026-04-17 02:57:48.808702 | orchestrator | 2026-04-17 02:57:48.808706 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-17 02:57:48.808710 | orchestrator | Friday 17 April 2026 02:57:47 +0000 (0:00:00.050) 0:00:26.451 ********** 2026-04-17 02:57:48.808714 | orchestrator | changed: [testbed-manager] 2026-04-17 02:57:48.808718 | orchestrator | 2026-04-17 02:57:48.808721 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 02:57:48.808725 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 02:57:48.808730 | orchestrator | 2026-04-17 02:57:48.808734 | orchestrator | 2026-04-17 02:57:48.808738 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 02:57:48.808742 | orchestrator | Friday 17 April 2026 02:57:48 +0000 (0:00:00.712) 0:00:27.164 ********** 2026-04-17 02:57:48.808746 | orchestrator | =============================================================================== 2026-04-17 02:57:48.808749 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.82s 2026-04-17 
02:57:48.808753 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.16s 2026-04-17 02:57:48.808758 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-04-17 02:57:48.808762 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-17 02:57:48.808766 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-17 02:57:48.808769 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-17 02:57:48.808773 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-17 02:57:48.808777 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-04-17 02:57:48.808781 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-17 02:57:48.808785 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-17 02:57:48.808788 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-17 02:57:48.808792 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-17 02:57:48.808796 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-17 02:57:48.808800 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-17 02:57:48.808808 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-17 02:57:48.808812 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-17 02:57:48.808816 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.71s 2026-04-17 
02:57:48.808820 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-04-17 02:57:48.808824 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-04-17 02:57:48.808828 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-04-17 02:57:49.065017 | orchestrator | + osism apply squid 2026-04-17 02:58:01.043643 | orchestrator | 2026-04-17 02:58:01 | INFO  | Task ec2a4d41-02c7-4149-9ac2-1248bb449c45 (squid) was prepared for execution. 2026-04-17 02:58:01.043727 | orchestrator | 2026-04-17 02:58:01 | INFO  | It takes a moment until task ec2a4d41-02c7-4149-9ac2-1248bb449c45 (squid) has been started and output is visible here. 2026-04-17 02:59:55.342008 | orchestrator | 2026-04-17 02:59:55.342128 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-17 02:59:55.342137 | orchestrator | 2026-04-17 02:59:55.342144 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-17 02:59:55.342150 | orchestrator | Friday 17 April 2026 02:58:05 +0000 (0:00:00.161) 0:00:00.161 ********** 2026-04-17 02:59:55.342156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-17 02:59:55.342163 | orchestrator | 2026-04-17 02:59:55.342169 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-17 02:59:55.342174 | orchestrator | Friday 17 April 2026 02:58:05 +0000 (0:00:00.083) 0:00:00.244 ********** 2026-04-17 02:59:55.342180 | orchestrator | ok: [testbed-manager] 2026-04-17 02:59:55.342275 | orchestrator | 2026-04-17 02:59:55.342285 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-17 
02:59:55.342293 | orchestrator | Friday 17 April 2026 02:58:06 +0000 (0:00:01.413) 0:00:01.658 ********** 2026-04-17 02:59:55.342303 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-17 02:59:55.342313 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-17 02:59:55.342321 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-17 02:59:55.342330 | orchestrator | 2026-04-17 02:59:55.342339 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-17 02:59:55.342348 | orchestrator | Friday 17 April 2026 02:58:07 +0000 (0:00:01.120) 0:00:02.778 ********** 2026-04-17 02:59:55.342356 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-17 02:59:55.342365 | orchestrator | 2026-04-17 02:59:55.342374 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-17 02:59:55.342382 | orchestrator | Friday 17 April 2026 02:58:08 +0000 (0:00:01.062) 0:00:03.840 ********** 2026-04-17 02:59:55.342392 | orchestrator | ok: [testbed-manager] 2026-04-17 02:59:55.342400 | orchestrator | 2026-04-17 02:59:55.342409 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-17 02:59:55.342418 | orchestrator | Friday 17 April 2026 02:58:09 +0000 (0:00:00.349) 0:00:04.189 ********** 2026-04-17 02:59:55.342428 | orchestrator | changed: [testbed-manager] 2026-04-17 02:59:55.342437 | orchestrator | 2026-04-17 02:59:55.342446 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-17 02:59:55.342455 | orchestrator | Friday 17 April 2026 02:58:10 +0000 (0:00:00.879) 0:00:05.069 ********** 2026-04-17 02:59:55.342464 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-17 02:59:55.342478 | orchestrator | ok: [testbed-manager] 2026-04-17 02:59:55.342487 | orchestrator | 2026-04-17 02:59:55.342497 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-17 02:59:55.342535 | orchestrator | Friday 17 April 2026 02:58:41 +0000 (0:00:31.313) 0:00:36.382 ********** 2026-04-17 02:59:55.342545 | orchestrator | changed: [testbed-manager] 2026-04-17 02:59:55.342554 | orchestrator | 2026-04-17 02:59:55.342562 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-17 02:59:55.342570 | orchestrator | Friday 17 April 2026 02:58:54 +0000 (0:00:12.983) 0:00:49.366 ********** 2026-04-17 02:59:55.342581 | orchestrator | Pausing for 60 seconds 2026-04-17 02:59:55.342590 | orchestrator | changed: [testbed-manager] 2026-04-17 02:59:55.342599 | orchestrator | 2026-04-17 02:59:55.342609 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-17 02:59:55.342619 | orchestrator | Friday 17 April 2026 02:59:54 +0000 (0:01:00.078) 0:01:49.444 ********** 2026-04-17 02:59:55.342628 | orchestrator | ok: [testbed-manager] 2026-04-17 02:59:55.342636 | orchestrator | 2026-04-17 02:59:55.342643 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-17 02:59:55.342649 | orchestrator | Friday 17 April 2026 02:59:54 +0000 (0:00:00.080) 0:01:49.525 ********** 2026-04-17 02:59:55.342656 | orchestrator | changed: [testbed-manager] 2026-04-17 02:59:55.342661 | orchestrator | 2026-04-17 02:59:55.342668 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 02:59:55.342674 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 02:59:55.342681 | orchestrator | 2026-04-17 02:59:55.342687 | orchestrator | 2026-04-17 02:59:55.342693 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-17 02:59:55.342700 | orchestrator | Friday 17 April 2026 02:59:55 +0000 (0:00:00.599) 0:01:50.124 ********** 2026-04-17 02:59:55.342706 | orchestrator | =============================================================================== 2026-04-17 02:59:55.342728 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-04-17 02:59:55.342737 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.31s 2026-04-17 02:59:55.342746 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.98s 2026-04-17 02:59:55.342755 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.41s 2026-04-17 02:59:55.342764 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.12s 2026-04-17 02:59:55.342773 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s 2026-04-17 02:59:55.342782 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s 2026-04-17 02:59:55.342791 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2026-04-17 02:59:55.342800 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-04-17 02:59:55.342809 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-04-17 02:59:55.342819 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-04-17 02:59:55.626900 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-17 02:59:55.626986 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-17 02:59:55.679414 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-17 02:59:55.679482 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release
2026-04-17 02:59:55.687495 | orchestrator | + set -e
2026-04-17 02:59:55.687579 | orchestrator | + NAMESPACE=kolla/release
2026-04-17 02:59:55.687593 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-17 02:59:55.693768 | orchestrator | ++ semver 9.5.0 9.0.0
2026-04-17 02:59:55.763098 | orchestrator | + [[ 1 -lt 0 ]]
2026-04-17 02:59:55.764290 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-04-17 03:00:07.738546 | orchestrator | 2026-04-17 03:00:07 | INFO  | Task dc6d8270-83b1-42c8-8728-dda70618eb01 (operator) was prepared for execution.
2026-04-17 03:00:07.738614 | orchestrator | 2026-04-17 03:00:07 | INFO  | It takes a moment until task dc6d8270-83b1-42c8-8728-dda70618eb01 (operator) has been started and output is visible here.
2026-04-17 03:00:23.287802 | orchestrator |
2026-04-17 03:00:23.287890 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-04-17 03:00:23.287899 | orchestrator |
2026-04-17 03:00:23.287905 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 03:00:23.287912 | orchestrator | Friday 17 April 2026 03:00:11 +0000 (0:00:00.139) 0:00:00.139 **********
2026-04-17 03:00:23.287918 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:00:23.287925 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:00:23.287931 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:00:23.287937 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:00:23.287943 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:00:23.287949 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:00:23.287954 | orchestrator |
2026-04-17 03:00:23.287961 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-04-17 03:00:23.287967 | orchestrator | Friday 17 April 2026 03:00:14 +0000 (0:00:03.207) 0:00:03.346 **********
2026-04-17 03:00:23.287973 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:00:23.287979 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:00:23.287997 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:00:23.288003 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:00:23.288009 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:00:23.288015 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:00:23.288020 | orchestrator |
2026-04-17 03:00:23.288026 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-04-17 03:00:23.288032 | orchestrator |
2026-04-17 03:00:23.288038 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-17 03:00:23.288044 | orchestrator | Friday 17 April 2026 03:00:15 +0000 (0:00:00.779) 0:00:04.126 **********
2026-04-17 03:00:23.288050 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:00:23.288055 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:00:23.288061 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:00:23.288067 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:00:23.288072 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:00:23.288079 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:00:23.288085 | orchestrator |
2026-04-17 03:00:23.288091 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-17 03:00:23.288097 | orchestrator | Friday 17 April 2026 03:00:15 +0000 (0:00:00.148) 0:00:04.274 **********
2026-04-17 03:00:23.288102 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:00:23.288108 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:00:23.288114 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:00:23.288119 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:00:23.288125 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:00:23.288131 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:00:23.288136 | orchestrator |
2026-04-17 03:00:23.288142 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-17 03:00:23.288148 | orchestrator | Friday 17 April 2026 03:00:16 +0000 (0:00:00.183) 0:00:04.458 **********
2026-04-17 03:00:23.288154 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:00:23.288161 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:00:23.288167 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:00:23.288173 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:00:23.288179 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:00:23.288204 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:00:23.288210 | orchestrator |
2026-04-17 03:00:23.288216 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-17 03:00:23.288222 | orchestrator | Friday 17 April 2026 03:00:16 +0000 (0:00:00.604) 0:00:05.062 **********
2026-04-17 03:00:23.288228 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:00:23.288234 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:00:23.288240 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:00:23.288245 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:00:23.288251 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:00:23.288257 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:00:23.288281 | orchestrator |
2026-04-17 03:00:23.288287 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-17 03:00:23.288293 | orchestrator | Friday 17 April 2026 03:00:17 +0000 (0:00:00.850) 0:00:05.913 **********
2026-04-17 03:00:23.288299 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-17 03:00:23.288305 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-17 03:00:23.288311 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-17 03:00:23.288317 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-17 03:00:23.288322 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-17 03:00:23.288328 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-17 03:00:23.288334 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-17 03:00:23.288340 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-17 03:00:23.288346 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-17 03:00:23.288353 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-17 03:00:23.288360 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-17 03:00:23.288367 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-17 03:00:23.288373 | orchestrator |
2026-04-17 03:00:23.288380 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-17 03:00:23.288387 | orchestrator | Friday 17 April 2026 03:00:18 +0000 (0:00:01.211) 0:00:07.124 **********
2026-04-17 03:00:23.288394 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:00:23.288400 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:00:23.288407 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:00:23.288414 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:00:23.288421 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:00:23.288428 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:00:23.288435 | orchestrator |
2026-04-17 03:00:23.288442 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-17 03:00:23.288449 | orchestrator | Friday 17 April 2026 03:00:19 +0000 (0:00:01.184) 0:00:08.309 **********
2026-04-17 03:00:23.288456 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-17 03:00:23.288463 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-17 03:00:23.288470 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-17 03:00:23.288477 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 03:00:23.288496 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 03:00:23.288503 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 03:00:23.288509 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 03:00:23.288516 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 03:00:23.288523 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 03:00:23.288530 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-17 03:00:23.288536 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-17 03:00:23.288543 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-17 03:00:23.288550 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-17 03:00:23.288556 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-17 03:00:23.288563 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-17 03:00:23.288569 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-17 03:00:23.288576 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-17 03:00:23.288583 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-17 03:00:23.288589 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-17 03:00:23.288596 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-17 03:00:23.288607 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-17 03:00:23.288614 | orchestrator |
2026-04-17 03:00:23.288620 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-17 03:00:23.288628 | orchestrator | Friday 17 April 2026 03:00:21 +0000 (0:00:01.260) 0:00:09.569 **********
2026-04-17 03:00:23.288634 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:00:23.288641 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:00:23.288648 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:00:23.288655 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:00:23.288661 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:00:23.288667 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:00:23.288674 | orchestrator |
2026-04-17 03:00:23.288680 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-17 03:00:23.288687 | orchestrator | Friday 17 April 2026 03:00:21 +0000 (0:00:00.144) 0:00:09.713 **********
2026-04-17 03:00:23.288694 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:00:23.288701 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:00:23.288708 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:00:23.288715 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:00:23.288722 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:00:23.288728 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:00:23.288735 | orchestrator |
2026-04-17 03:00:23.288741 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-17 03:00:23.288747 | orchestrator | Friday 17 April 2026 03:00:21 +0000 (0:00:00.184) 0:00:09.898 **********
2026-04-17 03:00:23.288753 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:00:23.288758 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:00:23.288764 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:00:23.288770 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:00:23.288775 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:00:23.288781 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:00:23.288786 | orchestrator |
2026-04-17 03:00:23.288792 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-17 03:00:23.288798 | orchestrator | Friday 17 April 2026 03:00:22 +0000 (0:00:00.562) 0:00:10.460 **********
2026-04-17 03:00:23.288804 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:00:23.288810 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:00:23.288815 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:00:23.288821 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:00:23.288834 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:00:23.288840 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:00:23.288846 | orchestrator |
2026-04-17 03:00:23.288851 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-17 03:00:23.288857 | orchestrator | Friday 17 April 2026 03:00:22 +0000 (0:00:00.190) 0:00:10.650 **********
2026-04-17 03:00:23.288863 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-17 03:00:23.288869 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:00:23.288875 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-17 03:00:23.288880 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:00:23.288886 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-17 03:00:23.288892 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:00:23.288898 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-17 03:00:23.288903 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:00:23.288909 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-17 03:00:23.288915 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:00:23.288921 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-17 03:00:23.288926 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:00:23.288932 | orchestrator |
2026-04-17 03:00:23.288938 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-17 03:00:23.288944 | orchestrator | Friday 17 April 2026 03:00:22 +0000 (0:00:00.744) 0:00:11.395 **********
2026-04-17 03:00:23.288954 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:00:23.288960 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:00:23.288965 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:00:23.288971 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:00:23.288977 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:00:23.288982 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:00:23.288988 | orchestrator |
2026-04-17 03:00:23.288994 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-17 03:00:23.289000 | orchestrator | Friday 17 April 2026 03:00:23 +0000 (0:00:00.159) 0:00:11.554 **********
2026-04-17 03:00:23.289006 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:00:23.289011 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:00:23.289017 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:00:23.289023 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:00:23.289033 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:00:24.635502 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:00:24.635631 | orchestrator |
2026-04-17 03:00:24.635649 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-17 03:00:24.635663 | orchestrator | Friday 17 April 2026 03:00:23 +0000 (0:00:00.158) 0:00:11.713 **********
2026-04-17 03:00:24.635674 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:00:24.635684 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:00:24.635736 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:00:24.635749 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:00:24.635760 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:00:24.635770 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:00:24.635781 | orchestrator |
2026-04-17 03:00:24.635790 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-17 03:00:24.635800 | orchestrator | Friday 17 April 2026 03:00:23 +0000 (0:00:00.193) 0:00:11.906 **********
2026-04-17 03:00:24.635811 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:00:24.635840 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:00:24.635850 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:00:24.635860 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:00:24.635870 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:00:24.635879 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:00:24.635890 | orchestrator |
2026-04-17 03:00:24.635899 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-17 03:00:24.635909 | orchestrator | Friday 17 April 2026 03:00:24 +0000 (0:00:00.673) 0:00:12.580 **********
2026-04-17 03:00:24.635919 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:00:24.635928 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:00:24.635940 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:00:24.635950 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:00:24.635960 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:00:24.635980 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:00:24.635991 | orchestrator |
2026-04-17 03:00:24.636001 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:00:24.636013 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 03:00:24.636025 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 03:00:24.636035 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 03:00:24.636045 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 03:00:24.636057 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 03:00:24.636089 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 03:00:24.636101 | orchestrator |
2026-04-17 03:00:24.636111 | orchestrator |
2026-04-17 03:00:24.636122 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:00:24.636133 | orchestrator | Friday 17 April 2026 03:00:24 +0000 (0:00:00.225) 0:00:12.805 **********
2026-04-17 03:00:24.636143 | orchestrator | ===============================================================================
2026-04-17 03:00:24.636153 | orchestrator | Gathering Facts --------------------------------------------------------- 3.21s
2026-04-17 03:00:24.636164 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s
2026-04-17 03:00:24.636176 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s
2026-04-17 03:00:24.636211 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s
2026-04-17 03:00:24.636223 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.85s
2026-04-17 03:00:24.636233 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s
2026-04-17 03:00:24.636243 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s
2026-04-17 03:00:24.636253 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s
2026-04-17 03:00:24.636264 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2026-04-17 03:00:24.636274 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2026-04-17 03:00:24.636284 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2026-04-17 03:00:24.636293 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.19s
2026-04-17 03:00:24.636304 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2026-04-17 03:00:24.636314 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s
2026-04-17 03:00:24.636324 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-04-17 03:00:24.636334 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-04-17 03:00:24.636345 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2026-04-17 03:00:24.636355 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s
2026-04-17 03:00:24.636366 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2026-04-17 03:00:24.917478 | orchestrator | + osism apply --environment custom facts
2026-04-17 03:00:26.775537 | orchestrator | 2026-04-17 03:00:26 | INFO  | Trying to run play facts in environment custom
2026-04-17 03:00:36.968873 | orchestrator | 2026-04-17 03:00:36 | INFO  | Task d4641cec-5693-4b11-aa43-b6e95a5f2e73 (facts) was prepared for execution.
2026-04-17 03:00:36.968967 | orchestrator | 2026-04-17 03:00:36 | INFO  | It takes a moment until task d4641cec-5693-4b11-aa43-b6e95a5f2e73 (facts) has been started and output is visible here.
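[Editor's note] The shell trace at the top of this section guards the namespace switch with a version comparison: `semver 9.5.0 9.0.0` prints `1`, so `[[ 1 -lt 0 ]]` is false and the downgrade branch is skipped. The `semver` helper itself is not shown in this log; `compare_versions` below is a hypothetical stand-in built on `sort -V`, a minimal sketch of the same -1/0/1 contract:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the `semver` helper seen in the trace above:
# prints -1, 0, or 1 depending on how the first version compares to the second.
compare_versions() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1   # $1 sorts first under version sort, so it is the lower version
  else
    echo 1
  fi
}

result=$(compare_versions 9.5.0 9.0.0)
echo "$result"
# 9.5.0 is newer than 9.0.0, so this prints 1; a guard like
# `[[ $result -lt 0 ]]` (as in the trace) is then false.
```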
2026-04-17 03:01:18.572762 | orchestrator |
2026-04-17 03:01:18.572937 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-17 03:01:18.572957 | orchestrator |
2026-04-17 03:01:18.572967 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-17 03:01:18.572977 | orchestrator | Friday 17 April 2026 03:00:40 +0000 (0:00:00.081) 0:00:00.081 **********
2026-04-17 03:01:18.572987 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:18.572997 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:18.573006 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:18.573015 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:01:18.573024 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:01:18.573033 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:01:18.573064 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:18.573074 | orchestrator |
2026-04-17 03:01:18.573083 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-17 03:01:18.573092 | orchestrator | Friday 17 April 2026 03:00:42 +0000 (0:00:01.349) 0:00:01.431 **********
2026-04-17 03:01:18.573101 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:18.573109 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:01:18.573118 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:18.573127 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:01:18.573135 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:18.573144 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:18.573152 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:01:18.573161 | orchestrator |
2026-04-17 03:01:18.573170 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-17 03:01:18.573178 | orchestrator |
2026-04-17 03:01:18.573267 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-17 03:01:18.573278 | orchestrator | Friday 17 April 2026 03:00:43 +0000 (0:00:01.124) 0:00:02.555 **********
2026-04-17 03:01:18.573287 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:18.573297 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:18.573308 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:18.573318 | orchestrator |
2026-04-17 03:01:18.573328 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-17 03:01:18.573339 | orchestrator | Friday 17 April 2026 03:00:43 +0000 (0:00:00.087) 0:00:02.643 **********
2026-04-17 03:01:18.573350 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:18.573360 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:18.573369 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:18.573379 | orchestrator |
2026-04-17 03:01:18.573389 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-17 03:01:18.573399 | orchestrator | Friday 17 April 2026 03:00:43 +0000 (0:00:00.189) 0:00:02.832 **********
2026-04-17 03:01:18.573409 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:18.573419 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:18.573430 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:18.573440 | orchestrator |
2026-04-17 03:01:18.573450 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-17 03:01:18.573460 | orchestrator | Friday 17 April 2026 03:00:43 +0000 (0:00:00.196) 0:00:03.029 **********
2026-04-17 03:01:18.573472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:01:18.573483 | orchestrator |
2026-04-17 03:01:18.573493 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-17 03:01:18.573504 | orchestrator | Friday 17 April 2026 03:00:44 +0000 (0:00:00.142) 0:00:03.171 **********
2026-04-17 03:01:18.573514 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:18.573523 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:18.573534 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:18.573544 | orchestrator |
2026-04-17 03:01:18.573553 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-17 03:01:18.573563 | orchestrator | Friday 17 April 2026 03:00:44 +0000 (0:00:00.481) 0:00:03.652 **********
2026-04-17 03:01:18.573574 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:01:18.573584 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:01:18.573594 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:01:18.573605 | orchestrator |
2026-04-17 03:01:18.573615 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-17 03:01:18.573625 | orchestrator | Friday 17 April 2026 03:00:44 +0000 (0:00:00.129) 0:00:03.781 **********
2026-04-17 03:01:18.573635 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:18.573646 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:18.573655 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:18.573676 | orchestrator |
2026-04-17 03:01:18.573686 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-17 03:01:18.573702 | orchestrator | Friday 17 April 2026 03:00:45 +0000 (0:00:01.130) 0:00:04.912 **********
2026-04-17 03:01:18.573711 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:18.573719 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:18.573728 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:18.573737 | orchestrator |
2026-04-17 03:01:18.573745 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-17 03:01:18.573800 | orchestrator | Friday 17 April 2026 03:00:46 +0000 (0:00:00.435) 0:00:05.348 **********
2026-04-17 03:01:18.573810 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:18.573819 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:18.573828 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:18.573836 | orchestrator |
2026-04-17 03:01:18.573845 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-17 03:01:18.573854 | orchestrator | Friday 17 April 2026 03:00:47 +0000 (0:00:01.013) 0:00:06.362 **********
2026-04-17 03:01:18.573862 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:18.573871 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:18.573880 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:18.573888 | orchestrator |
2026-04-17 03:01:18.573897 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-17 03:01:18.573906 | orchestrator | Friday 17 April 2026 03:01:02 +0000 (0:00:14.955) 0:00:21.317 **********
2026-04-17 03:01:18.573915 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:01:18.573923 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:01:18.573932 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:01:18.573941 | orchestrator |
2026-04-17 03:01:18.573950 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-17 03:01:18.573977 | orchestrator | Friday 17 April 2026 03:01:02 +0000 (0:00:00.071) 0:00:21.388 **********
2026-04-17 03:01:18.573987 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:18.573995 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:18.574004 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:18.574013 | orchestrator |
2026-04-17 03:01:18.574083 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-17 03:01:18.574093 | orchestrator | Friday 17 April 2026 03:01:09 +0000 (0:00:07.488) 0:00:28.877 **********
2026-04-17 03:01:18.574102 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:18.574111 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:18.574119 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:18.574128 | orchestrator |
2026-04-17 03:01:18.574137 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-17 03:01:18.574146 | orchestrator | Friday 17 April 2026 03:01:10 +0000 (0:00:00.483) 0:00:29.360 **********
2026-04-17 03:01:18.574154 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-17 03:01:18.574164 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-17 03:01:18.574172 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-17 03:01:18.574181 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-17 03:01:18.574267 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-17 03:01:18.574276 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-17 03:01:18.574285 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-17 03:01:18.574293 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-17 03:01:18.574302 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-17 03:01:18.574310 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-17 03:01:18.574319 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-17 03:01:18.574328 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-17 03:01:18.574336 | orchestrator |
2026-04-17 03:01:18.574345 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-17 03:01:18.574361 | orchestrator | Friday 17 April 2026 03:01:13 +0000 (0:00:03.344) 0:00:32.705 **********
2026-04-17 03:01:18.574370 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:18.574378 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:18.574387 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:18.574396 | orchestrator |
2026-04-17 03:01:18.574404 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-17 03:01:18.574413 | orchestrator |
2026-04-17 03:01:18.574422 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-17 03:01:18.574430 | orchestrator | Friday 17 April 2026 03:01:14 +0000 (0:00:01.368) 0:00:34.073 **********
2026-04-17 03:01:18.574439 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:18.574448 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:18.574456 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:18.574465 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:18.574473 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:18.574482 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:18.574491 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:18.574499 | orchestrator |
2026-04-17 03:01:18.574508 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:01:18.574517 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:01:18.574527 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:01:18.574537 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:01:18.574545 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:01:18.574554 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:01:18.574563 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:01:18.574572 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:01:18.574580 | orchestrator |
2026-04-17 03:01:18.574589 | orchestrator |
2026-04-17 03:01:18.574598 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:01:18.574607 | orchestrator | Friday 17 April 2026 03:01:18 +0000 (0:00:03.638) 0:00:37.711 **********
2026-04-17 03:01:18.574615 | orchestrator | ===============================================================================
2026-04-17 03:01:18.574624 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.96s
2026-04-17 03:01:18.574633 | orchestrator | Install required packages (Debian) -------------------------------------- 7.49s
2026-04-17 03:01:18.574641 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.64s
2026-04-17 03:01:18.574650 | orchestrator | Copy fact files --------------------------------------------------------- 3.34s
2026-04-17 03:01:18.574659 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.37s
2026-04-17 03:01:18.574667 | orchestrator | Create custom facts directory ------------------------------------------- 1.35s
2026-04-17 03:01:18.574683 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.13s
2026-04-17 03:01:18.797538 | orchestrator | Copy fact file ---------------------------------------------------------- 1.12s
2026-04-17 03:01:18.797610 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.01s
2026-04-17 03:01:18.797631 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-04-17 03:01:18.797650 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.48s
2026-04-17 03:01:18.797656 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-04-17 03:01:18.797664 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2026-04-17 03:01:18.797671 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-04-17 03:01:18.797679 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-04-17 03:01:18.797688 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-04-17 03:01:18.797695 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-04-17 03:01:18.797703 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.07s
2026-04-17 03:01:19.108535 | orchestrator | + osism apply bootstrap
2026-04-17 03:01:31.211945 | orchestrator | 2026-04-17 03:01:31 | INFO  | Task e0c0d9dd-eada-424f-98c5-65c4ee9c4fcf (bootstrap) was prepared for execution.
2026-04-17 03:01:31.212076 | orchestrator | 2026-04-17 03:01:31 | INFO  | It takes a moment until task e0c0d9dd-eada-424f-98c5-65c4ee9c4fcf (bootstrap) has been started and output is visible here.
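[Editor's note] The `facts` play above creates a custom facts directory and copies fact files (e.g. `testbed_ceph_osd_devices`) onto the nodes. This relies on Ansible's local-facts convention: files under `/etc/ansible/facts.d/` with a `.fact` suffix are read by the setup module and exposed as `ansible_local.<name>`. A minimal sketch, using a temporary directory and an illustrative device list not taken from this job:

```shell
#!/usr/bin/env bash
# Sketch of Ansible's facts.d mechanism. A static .fact file contains JSON
# (or INI); an executable .fact file prints JSON on stdout instead.
# The fact name is taken from the play above; the device list is made up.
facts_dir=$(mktemp -d)   # stand-in for /etc/ansible/facts.d on a node

cat > "$facts_dir/testbed_ceph_osd_devices.fact" <<'EOF'
{"devices": ["/dev/sdb", "/dev/sdc"]}
EOF

# Ansible's setup module would surface this as
# ansible_local.testbed_ceph_osd_devices.devices on the next fact gathering.
cat "$facts_dir/testbed_ceph_osd_devices.fact"
```

This is why the bootstrap play that follows starts by re-gathering facts: the freshly copied fact files only become visible to later plays once the setup module runs again.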
2026-04-17 03:01:46.945498 | orchestrator |
2026-04-17 03:01:46.945638 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-17 03:01:46.945665 | orchestrator |
2026-04-17 03:01:46.945683 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-17 03:01:46.945702 | orchestrator | Friday 17 April 2026 03:01:35 +0000 (0:00:00.150) 0:00:00.150 **********
2026-04-17 03:01:46.945720 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:46.945738 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:46.945757 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:46.945774 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:46.945790 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:46.945805 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:46.945819 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:46.945835 | orchestrator |
2026-04-17 03:01:46.945850 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-17 03:01:46.945866 | orchestrator |
2026-04-17 03:01:46.945881 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-17 03:01:46.945897 | orchestrator | Friday 17 April 2026 03:01:35 +0000 (0:00:00.251) 0:00:00.402 **********
2026-04-17 03:01:46.945912 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:46.945927 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:46.945941 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:46.945955 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:46.945971 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:46.945987 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:46.946003 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:46.946093 | orchestrator |
2026-04-17 03:01:46.946111 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-17 03:01:46.946123 | orchestrator |
2026-04-17 03:01:46.946134 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-17 03:01:46.946146 | orchestrator | Friday 17 April 2026 03:01:39 +0000 (0:00:03.666) 0:00:04.069 **********
2026-04-17 03:01:46.946159 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-17 03:01:46.946170 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-17 03:01:46.946180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-17 03:01:46.946219 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-17 03:01:46.946236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:01:46.946252 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-17 03:01:46.946267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:01:46.946285 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-17 03:01:46.946336 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-17 03:01:46.946346 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-17 03:01:46.946356 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-17 03:01:46.946366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:01:46.946376 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-17 03:01:46.946386 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-17 03:01:46.946395 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-17 03:01:46.946406 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:01:46.946416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-17 03:01:46.946425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-17 03:01:46.946432 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-17 03:01:46.946440 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 03:01:46.946448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-17 03:01:46.946456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-17 03:01:46.946463 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-17 03:01:46.946471 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-17 03:01:46.946479 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-17 03:01:46.946491 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 03:01:46.946507 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-17 03:01:46.946526 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-17 03:01:46.946538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-17 03:01:46.946551 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 03:01:46.946564 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-17 03:01:46.946576 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:01:46.946588 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-17 03:01:46.946600 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-17 03:01:46.946611 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 03:01:46.946623 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:01:46.946635 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-17 03:01:46.946648 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-17 03:01:46.946660 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-17 03:01:46.946673 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-17 03:01:46.946686 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 03:01:46.946699 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-17 03:01:46.946713 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 03:01:46.946725 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-17 03:01:46.946738 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 03:01:46.946750 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-17 03:01:46.946788 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:01:46.946801 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-17 03:01:46.946815 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-17 03:01:46.946828 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:01:46.946863 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 03:01:46.946877 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:01:46.946889 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-17 03:01:46.946901 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-17 03:01:46.946921 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 03:01:46.946929 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:01:46.946937 | orchestrator |
2026-04-17 03:01:46.946945 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-17 03:01:46.946954 | orchestrator |
2026-04-17 03:01:46.946962 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-17 03:01:46.946970 | orchestrator | Friday 17 April 2026 03:01:39 +0000 (0:00:00.438) 0:00:04.507 **********
2026-04-17 03:01:46.946978 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:46.946986 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:46.946994 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:46.947001 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:46.947009 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:46.947017 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:46.947025 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:46.947033 | orchestrator |
2026-04-17 03:01:46.947040 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-17 03:01:46.947048 | orchestrator | Friday 17 April 2026 03:01:40 +0000 (0:00:01.210) 0:00:05.717 **********
2026-04-17 03:01:46.947056 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:46.947064 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:46.947072 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:46.947079 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:46.947087 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:46.947095 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:46.947102 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:46.947110 | orchestrator |
2026-04-17 03:01:46.947118 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-17 03:01:46.947126 | orchestrator | Friday 17 April 2026 03:01:42 +0000 (0:00:01.207) 0:00:06.924 **********
2026-04-17 03:01:46.947134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:01:46.947144 | orchestrator |
2026-04-17 03:01:46.947152 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-17 03:01:46.947160 | orchestrator | Friday 17 April 2026 03:01:42 +0000 (0:00:00.307) 0:00:07.232 **********
2026-04-17 03:01:46.947168 | orchestrator | changed: [testbed-manager]
2026-04-17 03:01:46.947176 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:01:46.947183 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:01:46.947271 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:46.947280 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:46.947288 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:46.947296 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:01:46.947304 | orchestrator |
2026-04-17 03:01:46.947312 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-17 03:01:46.947320 | orchestrator | Friday 17 April 2026 03:01:44 +0000 (0:00:02.056) 0:00:09.289 **********
2026-04-17 03:01:46.947327 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:01:46.947337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:01:46.947347 | orchestrator |
2026-04-17 03:01:46.947355 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-17 03:01:46.947363 | orchestrator | Friday 17 April 2026 03:01:44 +0000 (0:00:00.269) 0:00:09.559 **********
2026-04-17 03:01:46.947371 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:46.947378 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:46.947386 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:46.947394 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:01:46.947402 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:01:46.947410 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:01:46.947439 | orchestrator |
2026-04-17 03:01:46.947461 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-17 03:01:46.947470 | orchestrator | Friday 17 April 2026 03:01:45 +0000 (0:00:01.044) 0:00:10.603 **********
2026-04-17 03:01:46.947478 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:01:46.947485 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:46.947493 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:46.947501 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:01:46.947517 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:01:46.947525 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:01:46.947533 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:46.947541 | orchestrator |
2026-04-17 03:01:46.947549 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-17 03:01:46.947557 | orchestrator | Friday 17 April 2026 03:01:46 +0000 (0:00:00.547) 0:00:11.150 **********
2026-04-17 03:01:46.947565 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:01:46.947572 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:01:46.947580 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:01:46.947588 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:01:46.947596 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:01:46.947604 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:01:46.947611 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:46.947619 | orchestrator |
2026-04-17 03:01:46.947628 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-17 03:01:46.947637 | orchestrator | Friday 17 April 2026 03:01:46 +0000 (0:00:00.235) 0:00:11.572 **********
2026-04-17 03:01:46.947645 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:01:46.947653 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:01:46.947669 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:01:58.518970 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:01:58.519120 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:01:58.519146 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:01:58.519166 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:01:58.519186 | orchestrator |
2026-04-17 03:01:58.519285 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-17 03:01:58.519304 | orchestrator | Friday 17 April 2026 03:01:47 +0000 (0:00:00.269) 0:00:11.808 **********
2026-04-17 03:01:58.519323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:01:58.519362 | orchestrator |
2026-04-17 03:01:58.519380 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-17 03:01:58.519398 | orchestrator | Friday 17 April 2026 03:01:47 +0000 (0:00:00.269) 0:00:12.077 **********
2026-04-17 03:01:58.519415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:01:58.519433 | orchestrator |
2026-04-17 03:01:58.519450 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-17 03:01:58.519467 | orchestrator | Friday 17 April 2026 03:01:47 +0000 (0:00:00.275) 0:00:12.353 **********
2026-04-17 03:01:58.519484 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:58.519503 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:58.519521 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:58.519538 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.519555 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:58.519571 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:58.519587 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:58.519603 | orchestrator |
2026-04-17 03:01:58.519620 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-17 03:01:58.519636 | orchestrator | Friday 17 April 2026 03:01:49 +0000 (0:00:01.442) 0:00:13.796 **********
2026-04-17 03:01:58.519683 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:01:58.519704 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:01:58.519721 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:01:58.519738 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:01:58.519754 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:01:58.519772 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:01:58.519788 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:01:58.519805 | orchestrator |
2026-04-17 03:01:58.519821 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-17 03:01:58.519838 | orchestrator | Friday 17 April 2026 03:01:49 +0000 (0:00:00.505) 0:00:14.071 **********
2026-04-17 03:01:58.519854 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.519871 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:58.519887 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:58.519903 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:58.519919 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:58.519934 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:58.519950 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:58.519966 | orchestrator |
2026-04-17 03:01:58.519983 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-17 03:01:58.519994 | orchestrator | Friday 17 April 2026 03:01:49 +0000 (0:00:00.217) 0:00:14.576 **********
2026-04-17 03:01:58.520004 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:01:58.520014 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:01:58.520023 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:01:58.520033 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:01:58.520042 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:01:58.520052 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:01:58.520062 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:01:58.520071 | orchestrator |
2026-04-17 03:01:58.520081 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-17 03:01:58.520092 | orchestrator | Friday 17 April 2026 03:01:50 +0000 (0:00:00.217) 0:00:14.794 **********
2026-04-17 03:01:58.520102 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.520111 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:58.520121 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:58.520130 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:58.520139 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:01:58.520159 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:01:58.520169 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:01:58.520179 | orchestrator |
2026-04-17 03:01:58.520188 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-17 03:01:58.520220 | orchestrator | Friday 17 April 2026 03:01:50 +0000 (0:00:00.543) 0:00:15.338 **********
2026-04-17 03:01:58.520230 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.520240 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:58.520249 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:58.520258 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:58.520268 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:01:58.520277 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:01:58.520287 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:01:58.520296 | orchestrator |
2026-04-17 03:01:58.520306 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-17 03:01:58.520315 | orchestrator | Friday 17 April 2026 03:01:51 +0000 (0:00:01.085) 0:00:16.424 **********
2026-04-17 03:01:58.520325 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:58.520335 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.520344 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:58.520354 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:58.520363 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:58.520372 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:58.520382 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:58.520391 | orchestrator |
2026-04-17 03:01:58.520412 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-17 03:01:58.520420 | orchestrator | Friday 17 April 2026 03:01:52 +0000 (0:00:00.992) 0:00:17.417 **********
2026-04-17 03:01:58.520448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:01:58.520457 | orchestrator |
2026-04-17 03:01:58.520465 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-17 03:01:58.520473 | orchestrator | Friday 17 April 2026 03:01:52 +0000 (0:00:00.272) 0:00:17.689 **********
2026-04-17 03:01:58.520481 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:01:58.520489 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:01:58.520496 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:01:58.520504 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:01:58.520512 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:01:58.520520 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:01:58.520528 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:01:58.520535 | orchestrator |
2026-04-17 03:01:58.520543 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-17 03:01:58.520551 | orchestrator | Friday 17 April 2026 03:01:54 +0000 (0:00:01.226) 0:00:18.916 **********
2026-04-17 03:01:58.520559 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.520567 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:58.520575 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:58.520583 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:58.520590 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:58.520598 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:58.520606 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:58.520614 | orchestrator |
2026-04-17 03:01:58.520622 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-17 03:01:58.520630 | orchestrator | Friday 17 April 2026 03:01:54 +0000 (0:00:00.203) 0:00:19.119 **********
2026-04-17 03:01:58.520638 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.520645 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:58.520653 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:58.520661 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:58.520668 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:58.520676 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:58.520684 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:58.520692 | orchestrator |
2026-04-17 03:01:58.520699 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-17 03:01:58.520707 | orchestrator | Friday 17 April 2026 03:01:54 +0000 (0:00:00.225) 0:00:19.344 **********
2026-04-17 03:01:58.520715 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.520723 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:58.520730 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:58.520738 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:58.520746 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:58.520753 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:58.520761 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:58.520769 | orchestrator |
2026-04-17 03:01:58.520777 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-17 03:01:58.520784 | orchestrator | Friday 17 April 2026 03:01:54 +0000 (0:00:00.198) 0:00:19.543 **********
2026-04-17 03:01:58.520803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:01:58.520821 | orchestrator |
2026-04-17 03:01:58.520829 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-17 03:01:58.520837 | orchestrator | Friday 17 April 2026 03:01:55 +0000 (0:00:00.244) 0:00:19.787 **********
2026-04-17 03:01:58.520845 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.520858 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:58.520866 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:58.520874 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:58.520882 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:58.520890 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:58.520898 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:58.520905 | orchestrator |
2026-04-17 03:01:58.520913 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-17 03:01:58.520921 | orchestrator | Friday 17 April 2026 03:01:55 +0000 (0:00:00.520) 0:00:20.308 **********
2026-04-17 03:01:58.520929 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:01:58.520937 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:01:58.520945 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:01:58.520953 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:01:58.520961 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:01:58.520968 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:01:58.520976 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:01:58.520984 | orchestrator |
2026-04-17 03:01:58.520992 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-17 03:01:58.521088 | orchestrator | Friday 17 April 2026 03:01:55 +0000 (0:00:00.204) 0:00:20.513 **********
2026-04-17 03:01:58.521100 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:58.521108 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.521116 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:58.521124 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:01:58.521132 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:01:58.521140 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:01:58.521148 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:58.521156 | orchestrator |
2026-04-17 03:01:58.521164 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-17 03:01:58.521172 | orchestrator | Friday 17 April 2026 03:01:56 +0000 (0:00:01.114) 0:00:21.628 **********
2026-04-17 03:01:58.521180 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.521188 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:58.521215 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:58.521223 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:01:58.521231 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:58.521238 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:01:58.521255 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:01:58.521263 | orchestrator |
2026-04-17 03:01:58.521272 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-17 03:01:58.521280 | orchestrator | Friday 17 April 2026 03:01:57 +0000 (0:00:00.548) 0:00:22.176 **********
2026-04-17 03:01:58.521288 | orchestrator | ok: [testbed-manager]
2026-04-17 03:01:58.521296 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:01:58.521303 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:01:58.521311 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:01:58.521327 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:02:38.253182 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:02:38.253390 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:02:38.253413 | orchestrator |
2026-04-17 03:02:38.253437 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-17 03:02:38.253454 | orchestrator | Friday 17 April 2026 03:01:58 +0000 (0:00:01.104) 0:00:23.280 **********
2026-04-17 03:02:38.253467 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:02:38.253481 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:02:38.253494 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:02:38.253507 | orchestrator | changed: [testbed-manager]
2026-04-17 03:02:38.253521 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:02:38.253535 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:02:38.253550 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:02:38.253563 | orchestrator |
2026-04-17 03:02:38.253577 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-17 03:02:38.253586 | orchestrator | Friday 17 April 2026 03:02:14 +0000 (0:00:15.758) 0:00:39.039 **********
2026-04-17 03:02:38.253617 | orchestrator | ok: [testbed-manager]
2026-04-17 03:02:38.253625 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:02:38.253633 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:02:38.253640 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:02:38.253648 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:02:38.253655 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:02:38.253663 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:02:38.253671 | orchestrator |
2026-04-17 03:02:38.253679 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-17 03:02:38.253687 | orchestrator | Friday 17 April 2026 03:02:14 +0000 (0:00:00.198) 0:00:39.238 **********
2026-04-17 03:02:38.253695 | orchestrator | ok: [testbed-manager]
2026-04-17 03:02:38.253704 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:02:38.253713 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:02:38.253722 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:02:38.253731 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:02:38.253741 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:02:38.253749 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:02:38.253758 | orchestrator |
2026-04-17 03:02:38.253768 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-17 03:02:38.253777 | orchestrator | Friday 17 April 2026 03:02:14 +0000 (0:00:00.214) 0:00:39.452 **********
2026-04-17 03:02:38.253787 | orchestrator | ok: [testbed-manager]
2026-04-17 03:02:38.253795 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:02:38.253804 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:02:38.253813 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:02:38.253822 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:02:38.253831 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:02:38.253841 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:02:38.253850 | orchestrator |
2026-04-17 03:02:38.253860 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-17 03:02:38.253869 | orchestrator | Friday 17 April 2026 03:02:14 +0000 (0:00:00.193) 0:00:39.646 **********
2026-04-17 03:02:38.253881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:02:38.253893 | orchestrator |
2026-04-17 03:02:38.253902 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-17 03:02:38.253911 | orchestrator | Friday 17 April 2026 03:02:15 +0000 (0:00:00.299) 0:00:39.945 **********
2026-04-17 03:02:38.253920 | orchestrator | ok: [testbed-manager]
2026-04-17 03:02:38.253928 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:02:38.253937 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:02:38.253946 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:02:38.253955 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:02:38.253964 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:02:38.253973 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:02:38.253983 | orchestrator |
2026-04-17 03:02:38.253992 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-17 03:02:38.254001 | orchestrator | Friday 17 April 2026 03:02:16 +0000 (0:00:01.701) 0:00:41.646 **********
2026-04-17 03:02:38.254010 | orchestrator | changed: [testbed-manager]
2026-04-17 03:02:38.254136 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:02:38.254152 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:02:38.254165 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:02:38.254178 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:02:38.254189 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:02:38.254226 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:02:38.254240 | orchestrator |
2026-04-17 03:02:38.254253 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-17 03:02:38.254281 | orchestrator | Friday 17 April 2026 03:02:17 +0000 (0:00:01.012) 0:00:42.658 **********
2026-04-17 03:02:38.254295 | orchestrator | ok: [testbed-manager]
2026-04-17 03:02:38.254308 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:02:38.254321 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:02:38.254347 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:02:38.254359 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:02:38.254373 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:02:38.254381 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:02:38.254389 | orchestrator |
2026-04-17 03:02:38.254397 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-17 03:02:38.254405 | orchestrator | Friday 17 April 2026 03:02:18 +0000 (0:00:00.798) 0:00:43.457 **********
2026-04-17 03:02:38.254413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:02:38.254423 | orchestrator |
2026-04-17 03:02:38.254431 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-17 03:02:38.254440 | orchestrator | Friday 17 April 2026 03:02:18 +0000 (0:00:00.270) 0:00:43.727 **********
2026-04-17 03:02:38.254448 | orchestrator | changed: [testbed-manager]
2026-04-17 03:02:38.254455 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:02:38.254463 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:02:38.254471 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:02:38.254484 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:02:38.254504 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:02:38.254518 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:02:38.254530 | orchestrator |
2026-04-17 03:02:38.254566 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-17 03:02:38.254580 | orchestrator | Friday 17 April 2026 03:02:20 +0000 (0:00:01.102) 0:00:44.830 **********
2026-04-17 03:02:38.254594 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:02:38.254607 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:02:38.254620 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:02:38.254632 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:02:38.254643 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:02:38.254655 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:02:38.254667 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:02:38.254680 | orchestrator |
2026-04-17 03:02:38.254694 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-17 03:02:38.254707 | orchestrator | Friday 17 April 2026 03:02:20 +0000 (0:00:00.207) 0:00:45.037 **********
2026-04-17 03:02:38.254720 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:02:38.254733 | orchestrator |
2026-04-17 03:02:38.254745 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-17 03:02:38.254757 | orchestrator | Friday 17 April 2026 03:02:20 +0000 (0:00:00.301) 0:00:45.338 **********
2026-04-17 03:02:38.254769 | orchestrator | ok: [testbed-manager]
2026-04-17 03:02:38.254781 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:02:38.254794 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:02:38.254806 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:02:38.254819 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:02:38.254833 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:02:38.254845 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:02:38.254858 |
orchestrator | 2026-04-17 03:02:38.254868 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-17 03:02:38.254876 | orchestrator | Friday 17 April 2026 03:02:22 +0000 (0:00:01.770) 0:00:47.109 ********** 2026-04-17 03:02:38.254884 | orchestrator | changed: [testbed-manager] 2026-04-17 03:02:38.254892 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:02:38.254899 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:02:38.254907 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:02:38.254915 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:02:38.254922 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:02:38.254930 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:02:38.254948 | orchestrator | 2026-04-17 03:02:38.254956 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-17 03:02:38.254964 | orchestrator | Friday 17 April 2026 03:02:23 +0000 (0:00:01.094) 0:00:48.203 ********** 2026-04-17 03:02:38.254972 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:02:38.254980 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:02:38.254988 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:02:38.254995 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:02:38.255003 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:02:38.255011 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:02:38.255019 | orchestrator | changed: [testbed-manager] 2026-04-17 03:02:38.255027 | orchestrator | 2026-04-17 03:02:38.255034 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-17 03:02:38.255042 | orchestrator | Friday 17 April 2026 03:02:35 +0000 (0:00:11.682) 0:00:59.886 ********** 2026-04-17 03:02:38.255050 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:02:38.255058 | orchestrator | ok: [testbed-manager] 2026-04-17 03:02:38.255066 | orchestrator | ok: 
[testbed-node-5] 2026-04-17 03:02:38.255073 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:02:38.255081 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:02:38.255089 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:02:38.255097 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:02:38.255104 | orchestrator | 2026-04-17 03:02:38.255112 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-17 03:02:38.255120 | orchestrator | Friday 17 April 2026 03:02:36 +0000 (0:00:01.541) 0:01:01.427 ********** 2026-04-17 03:02:38.255128 | orchestrator | ok: [testbed-manager] 2026-04-17 03:02:38.255136 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:02:38.255144 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:02:38.255151 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:02:38.255159 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:02:38.255166 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:02:38.255174 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:02:38.255182 | orchestrator | 2026-04-17 03:02:38.255190 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-17 03:02:38.255243 | orchestrator | Friday 17 April 2026 03:02:37 +0000 (0:00:00.886) 0:01:02.314 ********** 2026-04-17 03:02:38.255258 | orchestrator | ok: [testbed-manager] 2026-04-17 03:02:38.255271 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:02:38.255283 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:02:38.255296 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:02:38.255305 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:02:38.255313 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:02:38.255320 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:02:38.255328 | orchestrator | 2026-04-17 03:02:38.255336 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-17 03:02:38.255344 | orchestrator | Friday 17 
April 2026 03:02:37 +0000 (0:00:00.190) 0:01:02.504 ********** 2026-04-17 03:02:38.255352 | orchestrator | ok: [testbed-manager] 2026-04-17 03:02:38.255359 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:02:38.255367 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:02:38.255374 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:02:38.255382 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:02:38.255390 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:02:38.255397 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:02:38.255405 | orchestrator | 2026-04-17 03:02:38.255413 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-17 03:02:38.255420 | orchestrator | Friday 17 April 2026 03:02:37 +0000 (0:00:00.202) 0:01:02.707 ********** 2026-04-17 03:02:38.255429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:02:38.255438 | orchestrator | 2026-04-17 03:02:38.255456 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-17 03:05:12.242794 | orchestrator | Friday 17 April 2026 03:02:38 +0000 (0:00:00.309) 0:01:03.016 ********** 2026-04-17 03:05:12.242888 | orchestrator | ok: [testbed-manager] 2026-04-17 03:05:12.242900 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:05:12.242909 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:05:12.242917 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:05:12.242925 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:05:12.242933 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:05:12.242941 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:05:12.242949 | orchestrator | 2026-04-17 03:05:12.242957 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
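The rsyslog tasks above install the package, template rsyslog.conf, and then forward all syslog traffic to a local fluentd daemon. As a minimal sketch of what such a forwarding task can look like (an illustration only, not the actual osism.services.rsyslog implementation; the drop-in file name and fluentd port 5140 are assumptions not taken from this log):

```yaml
# Hypothetical sketch of a syslog-to-fluentd forwarding task.
# The conf file name, port, and handler name are illustrative assumptions.
- name: Forward syslog messages to local fluentd daemon
  ansible.builtin.copy:
    content: |
      # Send all facilities/priorities to the local fluentd syslog input.
      # A single "@" selects UDP; "@@" would select TCP.
      *.* @127.0.0.1:5140
    dest: /etc/rsyslog.d/60-fluentd.conf
    owner: root
    group: root
    mode: "0644"
  notify: Restart rsyslog service
```

Dropping a snippet into /etc/rsyslog.d/ and notifying a restart handler is why the task reports `changed` on every host on the first run and `ok` afterwards.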
2026-04-17 03:05:12.242966 | orchestrator | Friday 17 April 2026 03:02:40 +0000 (0:00:01.777) 0:01:04.794 ********** 2026-04-17 03:05:12.242974 | orchestrator | changed: [testbed-manager] 2026-04-17 03:05:12.242983 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:05:12.242991 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:05:12.242999 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:05:12.243007 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:05:12.243015 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:05:12.243023 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:05:12.243030 | orchestrator | 2026-04-17 03:05:12.243038 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-17 03:05:12.243047 | orchestrator | Friday 17 April 2026 03:02:40 +0000 (0:00:00.560) 0:01:05.355 ********** 2026-04-17 03:05:12.243055 | orchestrator | ok: [testbed-manager] 2026-04-17 03:05:12.243063 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:05:12.243070 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:05:12.243078 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:05:12.243086 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:05:12.243094 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:05:12.243101 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:05:12.243109 | orchestrator | 2026-04-17 03:05:12.243118 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-17 03:05:12.243126 | orchestrator | Friday 17 April 2026 03:02:40 +0000 (0:00:00.210) 0:01:05.565 ********** 2026-04-17 03:05:12.243134 | orchestrator | ok: [testbed-manager] 2026-04-17 03:05:12.243142 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:05:12.243150 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:05:12.243157 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:05:12.243165 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:05:12.243173 | 
orchestrator | ok: [testbed-node-1] 2026-04-17 03:05:12.243180 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:05:12.243188 | orchestrator | 2026-04-17 03:05:12.243196 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-17 03:05:12.243204 | orchestrator | Friday 17 April 2026 03:02:42 +0000 (0:00:01.368) 0:01:06.933 ********** 2026-04-17 03:05:12.243212 | orchestrator | changed: [testbed-manager] 2026-04-17 03:05:12.243246 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:05:12.243257 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:05:12.243267 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:05:12.243277 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:05:12.243286 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:05:12.243295 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:05:12.243304 | orchestrator | 2026-04-17 03:05:12.243317 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-17 03:05:12.243327 | orchestrator | Friday 17 April 2026 03:02:45 +0000 (0:00:02.966) 0:01:09.900 ********** 2026-04-17 03:05:12.243336 | orchestrator | ok: [testbed-manager] 2026-04-17 03:05:12.243346 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:05:12.243355 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:05:12.243364 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:05:12.243373 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:05:12.243383 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:05:12.243392 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:05:12.243401 | orchestrator | 2026-04-17 03:05:12.243432 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-17 03:05:12.243442 | orchestrator | Friday 17 April 2026 03:03:01 +0000 (0:00:16.763) 0:01:26.663 ********** 2026-04-17 03:05:12.243451 | orchestrator | ok: [testbed-manager] 2026-04-17 03:05:12.243460 
| orchestrator | ok: [testbed-node-2] 2026-04-17 03:05:12.243472 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:05:12.243485 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:05:12.243498 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:05:12.243511 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:05:12.243525 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:05:12.243537 | orchestrator | 2026-04-17 03:05:12.243550 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-17 03:05:12.243563 | orchestrator | Friday 17 April 2026 03:03:42 +0000 (0:00:40.484) 0:02:07.148 ********** 2026-04-17 03:05:12.243577 | orchestrator | changed: [testbed-manager] 2026-04-17 03:05:12.243591 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:05:12.243605 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:05:12.243619 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:05:12.243632 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:05:12.243645 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:05:12.243658 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:05:12.243671 | orchestrator | 2026-04-17 03:05:12.243684 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-17 03:05:12.243692 | orchestrator | Friday 17 April 2026 03:04:58 +0000 (0:01:16.398) 0:03:23.547 ********** 2026-04-17 03:05:12.243700 | orchestrator | ok: [testbed-manager] 2026-04-17 03:05:12.243708 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:05:12.243716 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:05:12.243724 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:05:12.243732 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:05:12.243739 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:05:12.243747 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:05:12.243755 | orchestrator | 2026-04-17 03:05:12.243763 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-04-17 03:05:12.243771 | orchestrator | Friday 17 April 2026 03:05:00 +0000 (0:00:01.751) 0:03:25.299 ********** 2026-04-17 03:05:12.243779 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:05:12.243787 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:05:12.243794 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:05:12.243802 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:05:12.243810 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:05:12.243817 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:05:12.243825 | orchestrator | changed: [testbed-manager] 2026-04-17 03:05:12.243833 | orchestrator | 2026-04-17 03:05:12.243841 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-17 03:05:12.243849 | orchestrator | Friday 17 April 2026 03:05:11 +0000 (0:00:10.664) 0:03:35.964 ********** 2026-04-17 03:05:12.243890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-17 03:05:12.243918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-17 03:05:12.243942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-17 03:05:12.243953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-17 03:05:12.243961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-17 03:05:12.243969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-04-17 03:05:12.243977 | orchestrator | 2026-04-17 03:05:12.243986 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-17 03:05:12.243994 | orchestrator | Friday 17 April 2026 03:05:11 +0000 (0:00:00.338) 0:03:36.302 ********** 2026-04-17 03:05:12.244002 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-17 03:05:12.244010 | orchestrator | 
skipping: [testbed-manager] 2026-04-17 03:05:12.244018 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-17 03:05:12.244026 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:05:12.244034 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-17 03:05:12.244046 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:05:12.244054 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-17 03:05:12.244062 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:05:12.244070 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-17 03:05:12.244078 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-17 03:05:12.244086 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-17 03:05:12.244094 | orchestrator | 2026-04-17 03:05:12.244102 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-17 03:05:12.244110 | orchestrator | Friday 17 April 2026 03:05:12 +0000 (0:00:00.611) 0:03:36.913 ********** 2026-04-17 03:05:12.244117 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-17 03:05:12.244127 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-17 03:05:12.244135 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-17 03:05:12.244143 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-17 03:05:12.244150 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-17 03:05:12.244164 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-17 03:05:16.497640 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-17 03:05:16.497729 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-17 03:05:16.497762 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-17 03:05:16.497771 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-17 03:05:16.497786 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-17 03:05:16.497806 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-17 03:05:16.497821 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-17 03:05:16.497834 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-17 03:05:16.497848 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-17 03:05:16.497861 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-17 03:05:16.497873 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-17 03:05:16.497886 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-17 03:05:16.497899 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-17 03:05:16.497925 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-17 03:05:16.497938 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-17 03:05:16.497952 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-17 03:05:16.497964 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-17 03:05:16.497979 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-17 03:05:16.497990 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-17 03:05:16.497998 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-17 03:05:16.498006 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-17 03:05:16.498014 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-17 03:05:16.498076 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-17 03:05:16.498084 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-17 03:05:16.498092 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-17 03:05:16.498100 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-17 03:05:16.498108 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:05:16.498118 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-17 03:05:16.498126 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-17 03:05:16.498147 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'net.core.rmem_max', 'value': 16777216})  2026-04-17 03:05:16.498155 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-17 03:05:16.498163 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-17 03:05:16.498171 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-17 03:05:16.498178 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-17 03:05:16.498188 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-17 03:05:16.498208 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:05:16.498218 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:05:16.498249 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:05:16.498258 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-17 03:05:16.498268 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-17 03:05:16.498277 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-17 03:05:16.498287 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-17 03:05:16.498296 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-17 03:05:16.498320 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-17 03:05:16.498329 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-17 03:05:16.498339 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-17 
03:05:16.498348 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-17 03:05:16.498357 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-17 03:05:16.498367 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-17 03:05:16.498376 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-17 03:05:16.498385 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-17 03:05:16.498394 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-17 03:05:16.498403 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-17 03:05:16.498412 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-17 03:05:16.498422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-17 03:05:16.498436 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-17 03:05:16.498449 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-17 03:05:16.498462 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-17 03:05:16.498484 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-17 03:05:16.498497 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-17 03:05:16.498511 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-17 03:05:16.498524 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-17 03:05:16.498537 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-17 03:05:16.498549 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-17 03:05:16.498561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-17 03:05:16.498573 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-17 03:05:16.498585 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-17 03:05:16.498598 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-17 03:05:16.498621 | orchestrator | 2026-04-17 03:05:16.498635 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-04-17 03:05:16.498647 | orchestrator | Friday 17 April 2026 03:05:15 +0000 (0:00:03.392) 0:03:40.306 ********** 2026-04-17 03:05:16.498660 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 03:05:16.498672 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 03:05:16.498685 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 03:05:16.498698 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 03:05:16.498717 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 03:05:16.498730 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 03:05:16.498743 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 03:05:16.498757 | orchestrator 
2026-04-17 03:05:16.498766 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-17 03:05:16.498774 | orchestrator | Friday 17 April 2026 03:05:16 +0000 (0:00:00.528) 0:03:40.834 **********
2026-04-17 03:05:16.498781 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:16.498789 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:05:16.498797 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:16.498804 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:16.498812 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:05:16.498820 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:05:16.498828 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:16.498835 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:05:16.498843 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:16.498851 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:16.498867 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:29.989805 | orchestrator |
2026-04-17 03:05:29.989922 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-17 03:05:29.989939 | orchestrator | Friday 17 April 2026 03:05:16 +0000 (0:00:00.430) 0:03:41.265 **********
2026-04-17 03:05:29.989951 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:29.989967 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:29.989987 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:05:29.990065 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:29.990091 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:29.990108 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:05:29.990127 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:05:29.990147 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:05:29.990164 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:29.990183 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:29.990202 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-17 03:05:29.990252 | orchestrator |
2026-04-17 03:05:29.990269 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-17 03:05:29.990302 | orchestrator | Friday 17 April 2026 03:05:17 +0000 (0:00:00.526) 0:03:41.791 **********
2026-04-17 03:05:29.990314 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-17 03:05:29.990325 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:05:29.990336 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-17 03:05:29.990349 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:05:29.990364 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-17 03:05:29.990378 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:05:29.990390 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-17 03:05:29.990402 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:05:29.990415 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-17 03:05:29.990429 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-17 03:05:29.990441 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-17 03:05:29.990454 | orchestrator |
2026-04-17 03:05:29.990466 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-17 03:05:29.990479 | orchestrator | Friday 17 April 2026 03:05:17 +0000 (0:00:00.500) 0:03:42.292 **********
2026-04-17 03:05:29.990492 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:05:29.990505 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:05:29.990517 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:05:29.990530 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:05:29.990542 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:05:29.990555 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:05:29.990567 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:05:29.990580 | orchestrator |
2026-04-17 03:05:29.990594 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-17 03:05:29.990607 | orchestrator | Friday 17 April 2026 03:05:17 +0000 (0:00:00.235) 0:03:42.528 **********
2026-04-17 03:05:29.990620 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:05:29.990634 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:05:29.990646 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:05:29.990658 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:05:29.990670 |
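The sysctl tasks above loop over name/value items and apply each parameter only on its matching host group, which is why most hosts report `skipping`. A minimal sketch of that pattern, assuming the `ansible.posix` collection (the actual osism.commons.sysctl role tasks may differ); the parameter and value mirror the log:

```yaml
# Illustrative sketch only, not the role's real implementation.
- name: Set sysctl parameters on network
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true   # also apply the value live via sysctl -w
    state: present
  loop:
    - { name: net.netfilter.nf_conntrack_max, value: 1048576 }
  when: "'network' in group_names"   # hosts outside the group are skipped
```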
orchestrator | ok: [testbed-node-2]
2026-04-17 03:05:29.990683 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:05:29.990695 | orchestrator | ok: [testbed-manager]
2026-04-17 03:05:29.990707 | orchestrator |
2026-04-17 03:05:29.990721 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-17 03:05:29.990734 | orchestrator | Friday 17 April 2026 03:05:24 +0000 (0:00:06.476) 0:03:49.004 **********
2026-04-17 03:05:29.990747 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-17 03:05:29.990760 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-17 03:05:29.990770 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:05:29.990781 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-17 03:05:29.990792 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:05:29.990803 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:05:29.990814 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-17 03:05:29.990825 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-17 03:05:29.990836 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:05:29.990847 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-17 03:05:29.990872 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:05:29.990895 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:05:29.990921 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-17 03:05:29.990940 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:05:29.990970 | orchestrator |
2026-04-17 03:05:29.990987 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-17 03:05:29.991006 | orchestrator | Friday 17 April 2026 03:05:24 +0000 (0:00:00.306) 0:03:49.311 **********
2026-04-17 03:05:29.991024 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-17 03:05:29.991042 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-17 03:05:29.991061 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-17 03:05:29.991105 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-17 03:05:29.991122 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-17 03:05:29.991140 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-17 03:05:29.991167 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-17 03:05:29.991187 | orchestrator |
2026-04-17 03:05:29.991204 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-17 03:05:29.991259 | orchestrator | Friday 17 April 2026 03:05:25 +0000 (0:00:01.134) 0:03:50.446 **********
2026-04-17 03:05:29.991283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:05:29.991303 | orchestrator |
2026-04-17 03:05:29.991321 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-17 03:05:29.991337 | orchestrator | Friday 17 April 2026 03:05:26 +0000 (0:00:01.209) 0:03:50.852 **********
2026-04-17 03:05:29.991355 | orchestrator | ok: [testbed-manager]
2026-04-17 03:05:29.991373 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:05:29.991392 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:05:29.991411 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:05:29.991430 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:05:29.991448 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:05:29.991465 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:05:29.991483 | orchestrator |
2026-04-17 03:05:29.991502 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-17 03:05:29.991522 | orchestrator | Friday 17 April 2026 03:05:27 +0000 (0:00:00.597) 0:03:52.062 **********
2026-04-17 03:05:29.991541 | orchestrator | ok: [testbed-manager]
2026-04-17 03:05:29.991559 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:05:29.991570 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:05:29.991581 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:05:29.991591 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:05:29.991602 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:05:29.991612 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:05:29.991623 | orchestrator |
2026-04-17 03:05:29.991634 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-17 03:05:29.991645 | orchestrator | Friday 17 April 2026 03:05:27 +0000 (0:00:00.599) 0:03:52.659 **********
2026-04-17 03:05:29.991655 | orchestrator | changed: [testbed-manager]
2026-04-17 03:05:29.991666 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:05:29.991677 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:05:29.991688 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:05:29.991698 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:05:29.991709 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:05:29.991719 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:05:29.991730 | orchestrator |
2026-04-17 03:05:29.991741 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-17 03:05:29.991752 | orchestrator | Friday 17 April 2026 03:05:28 +0000 (0:00:00.546) 0:03:53.258 **********
2026-04-17 03:05:29.991763 | orchestrator | ok: [testbed-manager]
2026-04-17 03:05:29.991773 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:05:29.991784 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:05:29.991794 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:05:29.991805 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:05:29.991816 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:05:29.991826 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:05:29.991837 | orchestrator |
2026-04-17 03:05:29.991859 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-17 03:05:29.991869 | orchestrator | Friday 17 April 2026 03:05:29 +0000 (0:00:00.546) 0:03:53.805 **********
2026-04-17 03:05:29.991892 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776393534.1503267, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:29.991907 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776393590.1782768, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:29.991919 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776393622.5071366, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:29.991959 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776393800.5157788, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:34.764038 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776393611.648209, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:34.764120 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776393604.681036, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:34.764127 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776393635.9372513, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:34.764148 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:34.764164 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:34.764168 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:34.764172 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:34.764192 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:34.764196 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:34.764200 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 03:05:34.764208 | orchestrator |
2026-04-17 03:05:34.764213 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-17 03:05:34.764218 | orchestrator | Friday 17 April 2026 03:05:29 +0000 (0:00:00.944) 0:03:54.750 **********
2026-04-17 03:05:34.764247 | orchestrator | changed: [testbed-manager]
2026-04-17 03:05:34.764254 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:05:34.764257 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:05:34.764262 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:05:34.764269 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:05:34.764274 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:05:34.764280 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:05:34.764287 | orchestrator |
2026-04-17 03:05:34.764293 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-17 03:05:34.764298 | orchestrator | Friday 17 April 2026 03:05:31 +0000 (0:00:01.141) 0:03:55.891 **********
2026-04-17 03:05:34.764304 | orchestrator | changed: [testbed-manager]
2026-04-17 03:05:34.764310 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:05:34.764316 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:05:34.764321 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:05:34.764327 | orchestrator | changed: [testbed-node-1]
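The pam_motd cleanup above first lists the files in /etc/pam.d and then loops over them to strip the pam_motd rule. A hedged sketch of that two-step pattern (the register name `pam_files` is hypothetical and this is not claimed to be the osism.commons.motd role's real task):

```yaml
# Illustrative sketch only; register name and details are assumptions.
- name: Get all configuration files in /etc/pam.d
  ansible.builtin.find:
    paths: /etc/pam.d
    file_type: file
  register: pam_files

- name: Remove pam_motd.so rule
  ansible.builtin.lineinfile:
    path: "{{ item.path }}"
    regexp: 'pam_motd\.so'
    state: absent
  loop: "{{ pam_files.files }}"
```

With `state: absent` and a `regexp`, lineinfile deletes every matching line, which matches the `changed` results reported for /etc/pam.d/sshd and /etc/pam.d/login in the log.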
2026-04-17 03:05:34.764333 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:05:34.764338 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:05:34.764344 | orchestrator |
2026-04-17 03:05:34.764354 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-17 03:05:34.764361 | orchestrator | Friday 17 April 2026 03:05:32 +0000 (0:00:01.096) 0:03:56.987 **********
2026-04-17 03:05:34.764367 | orchestrator | changed: [testbed-manager]
2026-04-17 03:05:34.764373 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:05:34.764380 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:05:34.764384 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:05:34.764388 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:05:34.764391 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:05:34.764395 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:05:34.764399 | orchestrator |
2026-04-17 03:05:34.764402 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-17 03:05:34.764406 | orchestrator | Friday 17 April 2026 03:05:33 +0000 (0:00:01.119) 0:03:58.107 **********
2026-04-17 03:05:34.764410 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:05:34.764413 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:05:34.764417 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:05:34.764421 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:05:34.764424 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:05:34.764428 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:05:34.764432 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:05:34.764435 | orchestrator |
2026-04-17 03:05:34.764439 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-17 03:05:34.764443 | orchestrator | Friday 17 April 2026 03:05:33 +0000 (0:00:00.281) 0:03:58.389 **********
2026-04-17 03:05:34.764447 | orchestrator | ok: [testbed-manager]
2026-04-17 03:05:34.764452 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:05:34.764456 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:05:34.764459 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:05:34.764463 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:05:34.764467 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:05:34.764470 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:05:34.764474 | orchestrator |
2026-04-17 03:05:34.764478 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-17 03:05:34.764481 | orchestrator | Friday 17 April 2026 03:05:34 +0000 (0:00:00.758) 0:03:59.147 **********
2026-04-17 03:05:34.764487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:05:34.764496 | orchestrator |
2026-04-17 03:05:34.764500 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-17 03:05:34.764509 | orchestrator | Friday 17 April 2026 03:05:34 +0000 (0:00:00.382) 0:03:59.530 **********
2026-04-17 03:06:50.703652 | orchestrator | ok: [testbed-manager]
2026-04-17 03:06:50.703744 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:06:50.703756 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:06:50.703763 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:06:50.703770 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:06:50.703776 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:06:50.703783 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:06:50.703790 | orchestrator |
2026-04-17 03:06:50.703798 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-17 03:06:50.703806 | orchestrator | Friday 17 April 2026 03:05:42 +0000 (0:00:08.094) 0:04:07.625 **********
2026-04-17 03:06:50.703812 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:06:50.703819 | orchestrator | ok: [testbed-manager]
2026-04-17 03:06:50.703826 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:06:50.703832 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:06:50.703839 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:06:50.703846 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:06:50.703852 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:06:50.703858 | orchestrator |
2026-04-17 03:06:50.703864 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-17 03:06:50.703871 | orchestrator | Friday 17 April 2026 03:05:44 +0000 (0:00:01.299) 0:04:08.924 **********
2026-04-17 03:06:50.703877 | orchestrator | ok: [testbed-manager]
2026-04-17 03:06:50.703883 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:06:50.703889 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:06:50.703895 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:06:50.703901 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:06:50.703908 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:06:50.703914 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:06:50.703921 | orchestrator |
2026-04-17 03:06:50.703927 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-17 03:06:50.703934 | orchestrator | Friday 17 April 2026 03:05:45 +0000 (0:00:01.052) 0:04:09.977 **********
2026-04-17 03:06:50.703940 | orchestrator | ok: [testbed-manager]
2026-04-17 03:06:50.703957 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:06:50.703964 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:06:50.703970 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:06:50.703977 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:06:50.703982 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:06:50.703989 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:06:50.703995 | orchestrator |
2026-04-17 03:06:50.704002 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-17 03:06:50.704010 | orchestrator | Friday 17 April 2026 03:05:45 +0000 (0:00:00.279) 0:04:10.257 **********
2026-04-17 03:06:50.704016 | orchestrator | ok: [testbed-manager]
2026-04-17 03:06:50.704023 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:06:50.704029 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:06:50.704035 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:06:50.704042 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:06:50.704049 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:06:50.704055 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:06:50.704062 | orchestrator |
2026-04-17 03:06:50.704069 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-17 03:06:50.704076 | orchestrator | Friday 17 April 2026 03:05:45 +0000 (0:00:00.310) 0:04:10.568 **********
2026-04-17 03:06:50.704082 | orchestrator | ok: [testbed-manager]
2026-04-17 03:06:50.704089 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:06:50.704095 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:06:50.704122 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:06:50.704129 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:06:50.704135 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:06:50.704142 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:06:50.704148 | orchestrator |
2026-04-17 03:06:50.704155 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-17 03:06:50.704162 | orchestrator | Friday 17 April 2026 03:05:46 +0000 (0:00:00.278) 0:04:10.847 **********
2026-04-17 03:06:50.704169 | orchestrator | ok: [testbed-manager]
2026-04-17 03:06:50.704175 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:06:50.704181 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:06:50.704187 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:06:50.704193 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:06:50.704199 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:06:50.704206 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:06:50.704213 | orchestrator |
2026-04-17 03:06:50.704220 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-17 03:06:50.704227 | orchestrator | Friday 17 April 2026 03:05:51 +0000 (0:00:05.565) 0:04:16.412 **********
2026-04-17 03:06:50.704253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:06:50.704262 | orchestrator |
2026-04-17 03:06:50.704269 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-17 03:06:50.704277 | orchestrator | Friday 17 April 2026 03:05:52 +0000 (0:00:00.378) 0:04:16.790 **********
2026-04-17 03:06:50.704284 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-17 03:06:50.704291 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-17 03:06:50.704298 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-17 03:06:50.704306 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:06:50.704313 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-17 03:06:50.704337 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-17 03:06:50.704345 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-17 03:06:50.704352 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:06:50.704359 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-17 03:06:50.704366 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:06:50.704374 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-17 03:06:50.704381 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-17 03:06:50.704388 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:06:50.704396 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-17 03:06:50.704403 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-17 03:06:50.704411 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-17 03:06:50.704432 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:06:50.704439 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:06:50.704446 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-17 03:06:50.704451 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-17 03:06:50.704458 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:06:50.704464 | orchestrator |
2026-04-17 03:06:50.704470 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-17 03:06:50.704477 | orchestrator | Friday 17 April 2026 03:05:52 +0000 (0:00:00.327) 0:04:17.117 **********
2026-04-17 03:06:50.704483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:06:50.704490 | orchestrator |
2026-04-17 03:06:50.704496 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-17 03:06:50.704509 | orchestrator | Friday 17 April 2026 03:05:52 +0000 (0:00:00.376) 0:04:17.493 **********
2026-04-17 03:06:50.704516 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-17 03:06:50.704523 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-17 03:06:50.704529 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:06:50.704536 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:06:50.704542 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-17 03:06:50.704549 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:06:50.704556 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-17 03:06:50.704562 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-17 03:06:50.704568 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:06:50.704573 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-17 03:06:50.704579 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:06:50.704584 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:06:50.704590 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-17 03:06:50.704596 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:06:50.704602 | orchestrator |
2026-04-17 03:06:50.704608 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-17 03:06:50.704613 | orchestrator | Friday 17 April 2026 03:05:53 +0000 (0:00:00.324) 0:04:17.818 **********
2026-04-17 03:06:50.704619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:06:50.704625 | orchestrator |
2026-04-17 03:06:50.704631 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-17 03:06:50.704637 | orchestrator | Friday 17 April 2026 03:05:53 +0000 (0:00:00.382) 0:04:18.201 **********
2026-04-17 03:06:50.704643 | orchestrator | changed: [testbed-node-3] 2026-04-17
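The package cleanup steps in this part of the log (clearing the package cache and removing no-longer-required dependencies) correspond to apt's autoclean/autoremove operations. A hedged sketch with `ansible.builtin.apt`, not claimed to be the osism.commons.cleanup role's actual tasks:

```yaml
# Illustrative sketch only.
- name: Remove useless packages from the cache
  ansible.builtin.apt:
    autoclean: true

- name: Remove dependencies that are no longer required
  ansible.builtin.apt:
    autoremove: true
```

Both operations are idempotent, which is consistent with the mix of `ok` and `changed` results across hosts in the log.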
03:06:50.704650 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:06:50.704655 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:06:50.704666 | orchestrator | changed: [testbed-manager] 2026-04-17 03:06:50.704673 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:06:50.704679 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:06:50.704686 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:06:50.704692 | orchestrator | 2026-04-17 03:06:50.704699 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-04-17 03:06:50.704705 | orchestrator | Friday 17 April 2026 03:06:27 +0000 (0:00:33.800) 0:04:52.001 ********** 2026-04-17 03:06:50.704712 | orchestrator | changed: [testbed-manager] 2026-04-17 03:06:50.704718 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:06:50.704723 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:06:50.704730 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:06:50.704735 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:06:50.704741 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:06:50.704748 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:06:50.704754 | orchestrator | 2026-04-17 03:06:50.704761 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-04-17 03:06:50.704768 | orchestrator | Friday 17 April 2026 03:06:35 +0000 (0:00:08.072) 0:05:00.073 ********** 2026-04-17 03:06:50.704774 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:06:50.704781 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:06:50.704787 | orchestrator | changed: [testbed-manager] 2026-04-17 03:06:50.704794 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:06:50.704800 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:06:50.704806 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:06:50.704813 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:06:50.704819 | 
orchestrator | 2026-04-17 03:06:50.704825 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-04-17 03:06:50.704837 | orchestrator | Friday 17 April 2026 03:06:42 +0000 (0:00:07.643) 0:05:07.717 ********** 2026-04-17 03:06:50.704844 | orchestrator | ok: [testbed-manager] 2026-04-17 03:06:50.704850 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:06:50.704857 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:06:50.704864 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:06:50.704870 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:06:50.704876 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:06:50.704883 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:06:50.704889 | orchestrator | 2026-04-17 03:06:50.704895 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-04-17 03:06:50.704902 | orchestrator | Friday 17 April 2026 03:06:44 +0000 (0:00:01.812) 0:05:09.529 ********** 2026-04-17 03:06:50.704907 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:06:50.704913 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:06:50.704919 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:06:50.704925 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:06:50.704931 | orchestrator | changed: [testbed-manager] 2026-04-17 03:06:50.704937 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:06:50.704944 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:06:50.704950 | orchestrator | 2026-04-17 03:06:50.704963 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-04-17 03:06:59.977772 | orchestrator | Friday 17 April 2026 03:06:50 +0000 (0:00:05.931) 0:05:15.460 ********** 2026-04-17 03:06:59.977882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:06:59.977905 | orchestrator | 2026-04-17 03:06:59.977922 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-04-17 03:06:59.977937 | orchestrator | Friday 17 April 2026 03:06:51 +0000 (0:00:00.375) 0:05:15.836 ********** 2026-04-17 03:06:59.977953 | orchestrator | changed: [testbed-manager] 2026-04-17 03:06:59.977965 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:06:59.977974 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:06:59.977982 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:06:59.977991 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:06:59.978000 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:06:59.978008 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:06:59.978073 | orchestrator | 2026-04-17 03:06:59.978085 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-04-17 03:06:59.978094 | orchestrator | Friday 17 April 2026 03:06:51 +0000 (0:00:00.729) 0:05:16.566 ********** 2026-04-17 03:06:59.978103 | orchestrator | ok: [testbed-manager] 2026-04-17 03:06:59.978113 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:06:59.978122 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:06:59.978130 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:06:59.978139 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:06:59.978147 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:06:59.978156 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:06:59.978164 | orchestrator | 2026-04-17 03:06:59.978173 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-04-17 03:06:59.978182 | orchestrator | Friday 17 April 2026 03:06:53 +0000 (0:00:01.417) 0:05:17.984 ********** 2026-04-17 03:06:59.978191 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:06:59.978200 | orchestrator | changed: [testbed-node-0] 
2026-04-17 03:06:59.978208 | orchestrator | changed: [testbed-manager] 2026-04-17 03:06:59.978217 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:06:59.978225 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:06:59.978254 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:06:59.978264 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:06:59.978273 | orchestrator | 2026-04-17 03:06:59.978281 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-04-17 03:06:59.978290 | orchestrator | Friday 17 April 2026 03:06:53 +0000 (0:00:00.649) 0:05:18.633 ********** 2026-04-17 03:06:59.978321 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:06:59.978332 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:06:59.978344 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:06:59.978354 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:06:59.978363 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:06:59.978374 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:06:59.978383 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:06:59.978394 | orchestrator | 2026-04-17 03:06:59.978403 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-04-17 03:06:59.978412 | orchestrator | Friday 17 April 2026 03:06:54 +0000 (0:00:00.238) 0:05:18.872 ********** 2026-04-17 03:06:59.978420 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:06:59.978429 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:06:59.978450 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:06:59.978459 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:06:59.978468 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:06:59.978476 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:06:59.978485 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:06:59.978493 | orchestrator | 2026-04-17 03:06:59.978502 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-04-17 03:06:59.978510 | orchestrator | Friday 17 April 2026 03:06:54 +0000 (0:00:00.322) 0:05:19.194 ********** 2026-04-17 03:06:59.978519 | orchestrator | ok: [testbed-manager] 2026-04-17 03:06:59.978528 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:06:59.978536 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:06:59.978545 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:06:59.978553 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:06:59.978562 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:06:59.978570 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:06:59.978578 | orchestrator | 2026-04-17 03:06:59.978587 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-04-17 03:06:59.978596 | orchestrator | Friday 17 April 2026 03:06:54 +0000 (0:00:00.243) 0:05:19.437 ********** 2026-04-17 03:06:59.978605 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:06:59.978613 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:06:59.978622 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:06:59.978630 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:06:59.978638 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:06:59.978647 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:06:59.978655 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:06:59.978664 | orchestrator | 2026-04-17 03:06:59.978672 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-17 03:06:59.978682 | orchestrator | Friday 17 April 2026 03:06:54 +0000 (0:00:00.218) 0:05:19.656 ********** 2026-04-17 03:06:59.978690 | orchestrator | ok: [testbed-manager] 2026-04-17 03:06:59.978699 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:06:59.978707 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:06:59.978716 | orchestrator | ok: 
[testbed-node-5] 2026-04-17 03:06:59.978724 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:06:59.978733 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:06:59.978741 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:06:59.978750 | orchestrator | 2026-04-17 03:06:59.978759 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-17 03:06:59.978767 | orchestrator | Friday 17 April 2026 03:06:55 +0000 (0:00:00.249) 0:05:19.905 ********** 2026-04-17 03:06:59.978776 | orchestrator | ok: [testbed-manager] =>  2026-04-17 03:06:59.978784 | orchestrator |  docker_version: 5:27.5.1 2026-04-17 03:06:59.978793 | orchestrator | ok: [testbed-node-3] =>  2026-04-17 03:06:59.978801 | orchestrator |  docker_version: 5:27.5.1 2026-04-17 03:06:59.978810 | orchestrator | ok: [testbed-node-4] =>  2026-04-17 03:06:59.978818 | orchestrator |  docker_version: 5:27.5.1 2026-04-17 03:06:59.978827 | orchestrator | ok: [testbed-node-5] =>  2026-04-17 03:06:59.978836 | orchestrator |  docker_version: 5:27.5.1 2026-04-17 03:06:59.978869 | orchestrator | ok: [testbed-node-0] =>  2026-04-17 03:06:59.978878 | orchestrator |  docker_version: 5:27.5.1 2026-04-17 03:06:59.978887 | orchestrator | ok: [testbed-node-1] =>  2026-04-17 03:06:59.978895 | orchestrator |  docker_version: 5:27.5.1 2026-04-17 03:06:59.978904 | orchestrator | ok: [testbed-node-2] =>  2026-04-17 03:06:59.978912 | orchestrator |  docker_version: 5:27.5.1 2026-04-17 03:06:59.978921 | orchestrator | 2026-04-17 03:06:59.978929 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-17 03:06:59.978938 | orchestrator | Friday 17 April 2026 03:06:55 +0000 (0:00:00.233) 0:05:20.139 ********** 2026-04-17 03:06:59.978947 | orchestrator | ok: [testbed-manager] =>  2026-04-17 03:06:59.978955 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-17 03:06:59.978964 | orchestrator | ok: [testbed-node-3] =>  2026-04-17 03:06:59.978972 | 
orchestrator |  docker_cli_version: 5:27.5.1 2026-04-17 03:06:59.978981 | orchestrator | ok: [testbed-node-4] =>  2026-04-17 03:06:59.978989 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-17 03:06:59.978998 | orchestrator | ok: [testbed-node-5] =>  2026-04-17 03:06:59.979006 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-17 03:06:59.979015 | orchestrator | ok: [testbed-node-0] =>  2026-04-17 03:06:59.979023 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-17 03:06:59.979031 | orchestrator | ok: [testbed-node-1] =>  2026-04-17 03:06:59.979040 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-17 03:06:59.979049 | orchestrator | ok: [testbed-node-2] =>  2026-04-17 03:06:59.979057 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-17 03:06:59.979066 | orchestrator | 2026-04-17 03:06:59.979095 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-17 03:06:59.979104 | orchestrator | Friday 17 April 2026 03:06:55 +0000 (0:00:00.222) 0:05:20.361 ********** 2026-04-17 03:06:59.979113 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:06:59.979121 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:06:59.979130 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:06:59.979138 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:06:59.979147 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:06:59.979155 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:06:59.979164 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:06:59.979172 | orchestrator | 2026-04-17 03:06:59.979181 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-17 03:06:59.979190 | orchestrator | Friday 17 April 2026 03:06:55 +0000 (0:00:00.242) 0:05:20.603 ********** 2026-04-17 03:06:59.979198 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:06:59.979207 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:06:59.979215 
| orchestrator | skipping: [testbed-node-4] 2026-04-17 03:06:59.979224 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:06:59.979232 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:06:59.979254 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:06:59.979262 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:06:59.979271 | orchestrator | 2026-04-17 03:06:59.979279 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-17 03:06:59.979288 | orchestrator | Friday 17 April 2026 03:06:56 +0000 (0:00:00.212) 0:05:20.816 ********** 2026-04-17 03:06:59.979299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:06:59.979310 | orchestrator | 2026-04-17 03:06:59.979324 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-17 03:06:59.979333 | orchestrator | Friday 17 April 2026 03:06:56 +0000 (0:00:00.345) 0:05:21.161 ********** 2026-04-17 03:06:59.979341 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:06:59.979350 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:06:59.979359 | orchestrator | ok: [testbed-manager] 2026-04-17 03:06:59.979367 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:06:59.979376 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:06:59.979401 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:06:59.979410 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:06:59.979419 | orchestrator | 2026-04-17 03:06:59.979427 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-17 03:06:59.979436 | orchestrator | Friday 17 April 2026 03:06:57 +0000 (0:00:00.774) 0:05:21.936 ********** 2026-04-17 03:06:59.979455 | orchestrator | ok: [testbed-node-3] 
2026-04-17 03:06:59.979475 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:06:59.979484 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:06:59.979492 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:06:59.979500 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:06:59.979509 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:06:59.979517 | orchestrator | ok: [testbed-manager] 2026-04-17 03:06:59.979526 | orchestrator | 2026-04-17 03:06:59.979535 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-17 03:06:59.979545 | orchestrator | Friday 17 April 2026 03:06:59 +0000 (0:00:02.464) 0:05:24.401 ********** 2026-04-17 03:06:59.979553 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-17 03:06:59.979562 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-17 03:06:59.979571 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-17 03:06:59.979580 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-17 03:06:59.979588 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-17 03:06:59.979597 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-17 03:06:59.979605 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:06:59.979614 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-17 03:06:59.979623 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-17 03:06:59.979631 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-17 03:06:59.979640 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:06:59.979648 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-17 03:06:59.979657 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-17 03:06:59.979666 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-17 03:06:59.979674 | 
orchestrator | skipping: [testbed-node-4] 2026-04-17 03:06:59.979683 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-17 03:06:59.979698 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-17 03:07:59.543914 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-17 03:07:59.544081 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:07:59.544109 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-04-17 03:07:59.544125 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-17 03:07:59.544141 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:07:59.544156 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-17 03:07:59.544172 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:07:59.544186 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-17 03:07:59.544201 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-17 03:07:59.544214 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-17 03:07:59.544354 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:07:59.544369 | orchestrator | 2026-04-17 03:07:59.544398 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-17 03:07:59.544427 | orchestrator | Friday 17 April 2026 03:07:00 +0000 (0:00:00.556) 0:05:24.958 ********** 2026-04-17 03:07:59.544442 | orchestrator | ok: [testbed-manager] 2026-04-17 03:07:59.544456 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:07:59.544472 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:07:59.544486 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:07:59.544503 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:07:59.544516 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:07:59.544560 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:07:59.544575 | orchestrator | 2026-04-17 
03:07:59.544589 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-17 03:07:59.544604 | orchestrator | Friday 17 April 2026 03:07:06 +0000 (0:00:06.018) 0:05:30.976 ********** 2026-04-17 03:07:59.544618 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:07:59.544632 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:07:59.544647 | orchestrator | ok: [testbed-manager] 2026-04-17 03:07:59.544662 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:07:59.544676 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:07:59.544690 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:07:59.544704 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:07:59.544718 | orchestrator | 2026-04-17 03:07:59.544733 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-17 03:07:59.544749 | orchestrator | Friday 17 April 2026 03:07:07 +0000 (0:00:01.075) 0:05:32.051 ********** 2026-04-17 03:07:59.544763 | orchestrator | ok: [testbed-manager] 2026-04-17 03:07:59.544778 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:07:59.544793 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:07:59.544806 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:07:59.544819 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:07:59.544832 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:07:59.544845 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:07:59.544858 | orchestrator | 2026-04-17 03:07:59.544871 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-17 03:07:59.544885 | orchestrator | Friday 17 April 2026 03:07:15 +0000 (0:00:08.431) 0:05:40.483 ********** 2026-04-17 03:07:59.544897 | orchestrator | changed: [testbed-manager] 2026-04-17 03:07:59.544909 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:07:59.544921 | orchestrator | changed: [testbed-node-4] 2026-04-17 
03:07:59.544934 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:07:59.544946 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:07:59.544959 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:07:59.544971 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:07:59.544983 | orchestrator | 2026-04-17 03:07:59.544997 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-17 03:07:59.545010 | orchestrator | Friday 17 April 2026 03:07:19 +0000 (0:00:03.399) 0:05:43.882 ********** 2026-04-17 03:07:59.545022 | orchestrator | ok: [testbed-manager] 2026-04-17 03:07:59.545036 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:07:59.545048 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:07:59.545061 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:07:59.545074 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:07:59.545087 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:07:59.545100 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:07:59.545114 | orchestrator | 2026-04-17 03:07:59.545126 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-17 03:07:59.545134 | orchestrator | Friday 17 April 2026 03:07:20 +0000 (0:00:01.487) 0:05:45.370 ********** 2026-04-17 03:07:59.545142 | orchestrator | ok: [testbed-manager] 2026-04-17 03:07:59.545150 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:07:59.545158 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:07:59.545166 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:07:59.545174 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:07:59.545181 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:07:59.545190 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:07:59.545197 | orchestrator | 2026-04-17 03:07:59.545205 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-17 
03:07:59.545213 | orchestrator | Friday 17 April 2026 03:07:22 +0000 (0:00:01.545) 0:05:46.916 ********** 2026-04-17 03:07:59.545250 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:07:59.545266 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:07:59.545274 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:07:59.545282 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:07:59.545305 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:07:59.545313 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:07:59.545321 | orchestrator | changed: [testbed-manager] 2026-04-17 03:07:59.545329 | orchestrator | 2026-04-17 03:07:59.545337 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-17 03:07:59.545345 | orchestrator | Friday 17 April 2026 03:07:22 +0000 (0:00:00.556) 0:05:47.472 ********** 2026-04-17 03:07:59.545353 | orchestrator | ok: [testbed-manager] 2026-04-17 03:07:59.545361 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:07:59.545369 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:07:59.545377 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:07:59.545385 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:07:59.545393 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:07:59.545401 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:07:59.545408 | orchestrator | 2026-04-17 03:07:59.545417 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-17 03:07:59.545448 | orchestrator | Friday 17 April 2026 03:07:32 +0000 (0:00:09.533) 0:05:57.005 ********** 2026-04-17 03:07:59.545456 | orchestrator | changed: [testbed-manager] 2026-04-17 03:07:59.545464 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:07:59.545472 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:07:59.545480 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:07:59.545488 | orchestrator | changed: 
[testbed-node-1] 2026-04-17 03:07:59.545496 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:07:59.545504 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:07:59.545512 | orchestrator | 2026-04-17 03:07:59.545520 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-17 03:07:59.545528 | orchestrator | Friday 17 April 2026 03:07:33 +0000 (0:00:00.928) 0:05:57.934 ********** 2026-04-17 03:07:59.545536 | orchestrator | ok: [testbed-manager] 2026-04-17 03:07:59.545544 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:07:59.545552 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:07:59.545560 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:07:59.545568 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:07:59.545576 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:07:59.545583 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:07:59.545591 | orchestrator | 2026-04-17 03:07:59.545599 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-17 03:07:59.545607 | orchestrator | Friday 17 April 2026 03:07:41 +0000 (0:00:08.772) 0:06:06.706 ********** 2026-04-17 03:07:59.545615 | orchestrator | ok: [testbed-manager] 2026-04-17 03:07:59.545623 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:07:59.545631 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:07:59.545639 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:07:59.545647 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:07:59.545654 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:07:59.545662 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:07:59.545670 | orchestrator | 2026-04-17 03:07:59.545678 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-17 03:07:59.545686 | orchestrator | Friday 17 April 2026 03:07:52 +0000 (0:00:10.909) 0:06:17.615 ********** 2026-04-17 
03:07:59.545694 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-17 03:07:59.545702 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-17 03:07:59.545710 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-17 03:07:59.545718 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-17 03:07:59.545726 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-17 03:07:59.545734 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-17 03:07:59.545742 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-17 03:07:59.545750 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-17 03:07:59.545757 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-17 03:07:59.545773 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-17 03:07:59.545787 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-17 03:07:59.545860 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-17 03:07:59.545878 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-17 03:07:59.545893 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-17 03:07:59.545907 | orchestrator | 2026-04-17 03:07:59.545922 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-04-17 03:07:59.545942 | orchestrator | Friday 17 April 2026 03:07:54 +0000 (0:00:01.179) 0:06:18.795 ********** 2026-04-17 03:07:59.545956 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:07:59.545967 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:07:59.545976 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:07:59.545983 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:07:59.545991 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:07:59.545999 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:07:59.546006 | orchestrator 
| skipping: [testbed-node-2] 2026-04-17 03:07:59.546014 | orchestrator | 2026-04-17 03:07:59.546078 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-04-17 03:07:59.546086 | orchestrator | Friday 17 April 2026 03:07:54 +0000 (0:00:00.482) 0:06:19.277 ********** 2026-04-17 03:07:59.546094 | orchestrator | ok: [testbed-manager] 2026-04-17 03:07:59.546102 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:07:59.546110 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:07:59.546118 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:07:59.546125 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:07:59.546133 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:07:59.546141 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:07:59.546149 | orchestrator | 2026-04-17 03:07:59.546157 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-04-17 03:07:59.546166 | orchestrator | Friday 17 April 2026 03:07:58 +0000 (0:00:04.128) 0:06:23.406 ********** 2026-04-17 03:07:59.546174 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:07:59.546182 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:07:59.546190 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:07:59.546197 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:07:59.546205 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:07:59.546213 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:07:59.546244 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:07:59.546256 | orchestrator | 2026-04-17 03:07:59.546265 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-04-17 03:07:59.546274 | orchestrator | Friday 17 April 2026 03:07:59 +0000 (0:00:00.475) 0:06:23.881 ********** 2026-04-17 03:07:59.546282 | orchestrator | skipping: [testbed-manager] => 
(item=python3-docker)  2026-04-17 03:07:59.546290 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-04-17 03:07:59.546298 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:07:59.546306 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-04-17 03:07:59.546313 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-04-17 03:07:59.546321 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:07:59.546329 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-04-17 03:07:59.546337 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-04-17 03:07:59.546345 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:07:59.546365 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-04-17 03:08:17.678469 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-04-17 03:08:17.678585 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:08:17.678600 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-04-17 03:08:17.678608 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-04-17 03:08:17.678616 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:08:17.678644 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-04-17 03:08:17.678652 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-04-17 03:08:17.678660 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:08:17.678667 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-04-17 03:08:17.678674 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-04-17 03:08:17.678681 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:08:17.678689 | orchestrator | 2026-04-17 03:08:17.678698 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-04-17 03:08:17.678707 | 
orchestrator | Friday 17 April 2026 03:07:59 +0000 (0:00:00.669) 0:06:24.551 ********** 2026-04-17 03:08:17.678714 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:08:17.678722 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:08:17.678729 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:08:17.678736 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:08:17.678743 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:08:17.678750 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:08:17.678757 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:08:17.678764 | orchestrator | 2026-04-17 03:08:17.678771 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-04-17 03:08:17.678779 | orchestrator | Friday 17 April 2026 03:08:00 +0000 (0:00:00.469) 0:06:25.020 ********** 2026-04-17 03:08:17.678786 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:08:17.678793 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:08:17.678800 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:08:17.678807 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:08:17.678814 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:08:17.678821 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:08:17.678828 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:08:17.678835 | orchestrator | 2026-04-17 03:08:17.678843 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-04-17 03:08:17.678850 | orchestrator | Friday 17 April 2026 03:08:00 +0000 (0:00:00.483) 0:06:25.504 ********** 2026-04-17 03:08:17.678857 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:08:17.678870 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:08:17.678886 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:08:17.678903 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:08:17.678914 | orchestrator | 
skipping: [testbed-node-0] 2026-04-17 03:08:17.678925 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:08:17.678936 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:08:17.678948 | orchestrator | 2026-04-17 03:08:17.678958 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-04-17 03:08:17.678971 | orchestrator | Friday 17 April 2026 03:08:01 +0000 (0:00:00.494) 0:06:25.999 ********** 2026-04-17 03:08:17.678983 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:17.678995 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:08:17.679009 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:08:17.679022 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:08:17.679035 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:08:17.679047 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:08:17.679060 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:08:17.679069 | orchestrator | 2026-04-17 03:08:17.679077 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-04-17 03:08:17.679086 | orchestrator | Friday 17 April 2026 03:08:03 +0000 (0:00:01.903) 0:06:27.902 ********** 2026-04-17 03:08:17.679095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:08:17.679106 | orchestrator | 2026-04-17 03:08:17.679115 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-04-17 03:08:17.679124 | orchestrator | Friday 17 April 2026 03:08:03 +0000 (0:00:00.806) 0:06:28.708 ********** 2026-04-17 03:08:17.679148 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:17.679156 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:08:17.679163 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:08:17.679170 | orchestrator | 
changed: [testbed-node-5] 2026-04-17 03:08:17.679177 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:08:17.679214 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:08:17.679221 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:08:17.679228 | orchestrator | 2026-04-17 03:08:17.679236 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-04-17 03:08:17.679243 | orchestrator | Friday 17 April 2026 03:08:04 +0000 (0:00:00.803) 0:06:29.512 ********** 2026-04-17 03:08:17.679250 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:17.679257 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:08:17.679264 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:08:17.679271 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:08:17.679278 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:08:17.679285 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:08:17.679292 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:08:17.679299 | orchestrator | 2026-04-17 03:08:17.679307 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-04-17 03:08:17.679314 | orchestrator | Friday 17 April 2026 03:08:05 +0000 (0:00:00.867) 0:06:30.379 ********** 2026-04-17 03:08:17.679321 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:17.679328 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:08:17.679335 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:08:17.679343 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:08:17.679350 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:08:17.679357 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:08:17.679364 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:08:17.679371 | orchestrator | 2026-04-17 03:08:17.679378 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-04-17 03:08:17.679401 | 
orchestrator | Friday 17 April 2026 03:08:07 +0000 (0:00:01.507) 0:06:31.887 ********** 2026-04-17 03:08:17.679409 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:08:17.679417 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:08:17.679424 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:08:17.679431 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:08:17.679450 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:08:17.679458 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:08:17.679472 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:08:17.679479 | orchestrator | 2026-04-17 03:08:17.679487 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-04-17 03:08:17.679494 | orchestrator | Friday 17 April 2026 03:08:08 +0000 (0:00:01.354) 0:06:33.242 ********** 2026-04-17 03:08:17.679501 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:17.679508 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:08:17.679515 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:08:17.679522 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:08:17.679529 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:08:17.679537 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:08:17.679544 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:08:17.679555 | orchestrator | 2026-04-17 03:08:17.679571 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-04-17 03:08:17.679588 | orchestrator | Friday 17 April 2026 03:08:09 +0000 (0:00:01.286) 0:06:34.528 ********** 2026-04-17 03:08:17.679599 | orchestrator | changed: [testbed-manager] 2026-04-17 03:08:17.679611 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:08:17.679623 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:08:17.679634 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:08:17.679646 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:08:17.679657 | 
orchestrator | changed: [testbed-node-1] 2026-04-17 03:08:17.679669 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:08:17.679681 | orchestrator | 2026-04-17 03:08:17.679704 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-04-17 03:08:17.679717 | orchestrator | Friday 17 April 2026 03:08:11 +0000 (0:00:01.385) 0:06:35.914 ********** 2026-04-17 03:08:17.679730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:08:17.679743 | orchestrator | 2026-04-17 03:08:17.679753 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-04-17 03:08:17.679760 | orchestrator | Friday 17 April 2026 03:08:12 +0000 (0:00:01.004) 0:06:36.918 ********** 2026-04-17 03:08:17.679767 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:08:17.679778 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:17.679791 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:08:17.679803 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:08:17.679814 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:08:17.679825 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:08:17.679837 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:08:17.679848 | orchestrator | 2026-04-17 03:08:17.679860 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-17 03:08:17.679873 | orchestrator | Friday 17 April 2026 03:08:13 +0000 (0:00:01.349) 0:06:38.268 ********** 2026-04-17 03:08:17.679885 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:17.679898 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:08:17.679910 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:08:17.679923 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:08:17.679953 | orchestrator | 
ok: [testbed-node-0] 2026-04-17 03:08:17.679965 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:08:17.679972 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:08:17.679984 | orchestrator | 2026-04-17 03:08:17.679995 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-17 03:08:17.680005 | orchestrator | Friday 17 April 2026 03:08:14 +0000 (0:00:01.023) 0:06:39.292 ********** 2026-04-17 03:08:17.680016 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:17.680027 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:08:17.680039 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:08:17.680051 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:08:17.680062 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:08:17.680073 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:08:17.680085 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:08:17.680096 | orchestrator | 2026-04-17 03:08:17.680109 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-17 03:08:17.680121 | orchestrator | Friday 17 April 2026 03:08:15 +0000 (0:00:00.992) 0:06:40.285 ********** 2026-04-17 03:08:17.680133 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:17.680143 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:08:17.680151 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:08:17.680158 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:08:17.680165 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:08:17.680172 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:08:17.680210 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:08:17.680218 | orchestrator | 2026-04-17 03:08:17.680226 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-17 03:08:17.680233 | orchestrator | Friday 17 April 2026 03:08:16 +0000 (0:00:01.119) 0:06:41.404 ********** 2026-04-17 03:08:17.680241 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:08:17.680248 | orchestrator | 2026-04-17 03:08:17.680256 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-17 03:08:17.680263 | orchestrator | Friday 17 April 2026 03:08:17 +0000 (0:00:00.769) 0:06:42.174 ********** 2026-04-17 03:08:17.680270 | orchestrator | 2026-04-17 03:08:17.680277 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-17 03:08:17.680292 | orchestrator | Friday 17 April 2026 03:08:17 +0000 (0:00:00.037) 0:06:42.211 ********** 2026-04-17 03:08:17.680300 | orchestrator | 2026-04-17 03:08:17.680307 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-17 03:08:17.680314 | orchestrator | Friday 17 April 2026 03:08:17 +0000 (0:00:00.041) 0:06:42.253 ********** 2026-04-17 03:08:17.680321 | orchestrator | 2026-04-17 03:08:17.680328 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-17 03:08:17.680345 | orchestrator | Friday 17 April 2026 03:08:17 +0000 (0:00:00.036) 0:06:42.290 ********** 2026-04-17 03:08:42.304674 | orchestrator | 2026-04-17 03:08:42.304801 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-17 03:08:42.304818 | orchestrator | Friday 17 April 2026 03:08:17 +0000 (0:00:00.035) 0:06:42.325 ********** 2026-04-17 03:08:42.304830 | orchestrator | 2026-04-17 03:08:42.304842 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-17 03:08:42.304854 | orchestrator | Friday 17 April 2026 03:08:17 +0000 (0:00:00.039) 0:06:42.364 ********** 2026-04-17 03:08:42.304864 | orchestrator | 2026-04-17 
03:08:42.304875 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-17 03:08:42.304887 | orchestrator | Friday 17 April 2026 03:08:17 +0000 (0:00:00.035) 0:06:42.400 ********** 2026-04-17 03:08:42.304897 | orchestrator | 2026-04-17 03:08:42.304908 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-17 03:08:42.304919 | orchestrator | Friday 17 April 2026 03:08:17 +0000 (0:00:00.035) 0:06:42.436 ********** 2026-04-17 03:08:42.304929 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:08:42.304941 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:08:42.304951 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:08:42.304962 | orchestrator | 2026-04-17 03:08:42.304973 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-17 03:08:42.304984 | orchestrator | Friday 17 April 2026 03:08:18 +0000 (0:00:00.991) 0:06:43.427 ********** 2026-04-17 03:08:42.304995 | orchestrator | changed: [testbed-manager] 2026-04-17 03:08:42.305007 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:08:42.305017 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:08:42.305028 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:08:42.305038 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:08:42.305048 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:08:42.305059 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:08:42.305069 | orchestrator | 2026-04-17 03:08:42.305080 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-17 03:08:42.305091 | orchestrator | Friday 17 April 2026 03:08:19 +0000 (0:00:01.319) 0:06:44.747 ********** 2026-04-17 03:08:42.305101 | orchestrator | changed: [testbed-manager] 2026-04-17 03:08:42.305112 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:08:42.305122 | orchestrator | changed: [testbed-node-4] 2026-04-17 
03:08:42.305133 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:08:42.305143 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:08:42.305153 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:08:42.305161 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:08:42.305197 | orchestrator | 2026-04-17 03:08:42.305208 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-17 03:08:42.305219 | orchestrator | Friday 17 April 2026 03:08:21 +0000 (0:00:01.106) 0:06:45.853 ********** 2026-04-17 03:08:42.305230 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:08:42.305241 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:08:42.305253 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:08:42.305264 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:08:42.305276 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:08:42.305287 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:08:42.305299 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:08:42.305310 | orchestrator | 2026-04-17 03:08:42.305334 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-17 03:08:42.305384 | orchestrator | Friday 17 April 2026 03:08:23 +0000 (0:00:02.156) 0:06:48.009 ********** 2026-04-17 03:08:42.305411 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:08:42.305422 | orchestrator | 2026-04-17 03:08:42.305434 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-17 03:08:42.305445 | orchestrator | Friday 17 April 2026 03:08:23 +0000 (0:00:00.081) 0:06:48.091 ********** 2026-04-17 03:08:42.305456 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:42.305467 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:08:42.305478 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:08:42.305488 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:08:42.305499 | 
orchestrator | changed: [testbed-node-0] 2026-04-17 03:08:42.305509 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:08:42.305520 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:08:42.305531 | orchestrator | 2026-04-17 03:08:42.305542 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-17 03:08:42.305553 | orchestrator | Friday 17 April 2026 03:08:24 +0000 (0:00:01.001) 0:06:49.092 ********** 2026-04-17 03:08:42.305564 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:08:42.305574 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:08:42.305584 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:08:42.305594 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:08:42.305604 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:08:42.305614 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:08:42.305625 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:08:42.305635 | orchestrator | 2026-04-17 03:08:42.305645 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-17 03:08:42.305655 | orchestrator | Friday 17 April 2026 03:08:24 +0000 (0:00:00.501) 0:06:49.594 ********** 2026-04-17 03:08:42.305691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:08:42.305704 | orchestrator | 2026-04-17 03:08:42.305713 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-17 03:08:42.305722 | orchestrator | Friday 17 April 2026 03:08:25 +0000 (0:00:01.037) 0:06:50.631 ********** 2026-04-17 03:08:42.305732 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:42.305741 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:08:42.305751 | orchestrator | ok: 
[testbed-node-4] 2026-04-17 03:08:42.305760 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:08:42.305769 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:08:42.305778 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:08:42.305788 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:08:42.305798 | orchestrator | 2026-04-17 03:08:42.305807 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-17 03:08:42.305816 | orchestrator | Friday 17 April 2026 03:08:26 +0000 (0:00:00.828) 0:06:51.460 ********** 2026-04-17 03:08:42.305826 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-17 03:08:42.305859 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-17 03:08:42.305871 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-17 03:08:42.305882 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-17 03:08:42.305892 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-17 03:08:42.305902 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-17 03:08:42.305913 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-17 03:08:42.305924 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-04-17 03:08:42.305934 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-17 03:08:42.305944 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-17 03:08:42.305954 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-17 03:08:42.305965 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-04-17 03:08:42.305987 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-04-17 03:08:42.305997 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-17 03:08:42.306008 | orchestrator | 2026-04-17 03:08:42.306083 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-04-17 03:08:42.306095 | orchestrator | Friday 17 April 2026 03:08:29 +0000 (0:00:02.562) 0:06:54.022 ********** 2026-04-17 03:08:42.306106 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:08:42.306116 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:08:42.306125 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:08:42.306135 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:08:42.306145 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:08:42.306276 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:08:42.306288 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:08:42.306299 | orchestrator | 2026-04-17 03:08:42.306310 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-17 03:08:42.306321 | orchestrator | Friday 17 April 2026 03:08:29 +0000 (0:00:00.507) 0:06:54.530 ********** 2026-04-17 03:08:42.306334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:08:42.306347 | orchestrator | 2026-04-17 03:08:42.306357 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-04-17 03:08:42.306367 | orchestrator | Friday 17 April 2026 03:08:30 +0000 (0:00:00.805) 0:06:55.336 ********** 2026-04-17 03:08:42.306377 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:42.306386 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:08:42.306397 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:08:42.306407 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:08:42.306418 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:08:42.306428 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:08:42.306438 | orchestrator | ok: 
[testbed-node-2] 2026-04-17 03:08:42.306449 | orchestrator | 2026-04-17 03:08:42.306460 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-17 03:08:42.306482 | orchestrator | Friday 17 April 2026 03:08:31 +0000 (0:00:00.843) 0:06:56.180 ********** 2026-04-17 03:08:42.306495 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:42.306506 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:08:42.306516 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:08:42.306526 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:08:42.306536 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:08:42.306546 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:08:42.306556 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:08:42.306566 | orchestrator | 2026-04-17 03:08:42.306577 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-17 03:08:42.306586 | orchestrator | Friday 17 April 2026 03:08:32 +0000 (0:00:01.012) 0:06:57.192 ********** 2026-04-17 03:08:42.306596 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:08:42.306605 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:08:42.306615 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:08:42.306625 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:08:42.306635 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:08:42.306644 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:08:42.306653 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:08:42.306664 | orchestrator | 2026-04-17 03:08:42.306673 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-04-17 03:08:42.306684 | orchestrator | Friday 17 April 2026 03:08:32 +0000 (0:00:00.502) 0:06:57.694 ********** 2026-04-17 03:08:42.306694 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:42.306779 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:08:42.306795 | 
orchestrator | ok: [testbed-node-4] 2026-04-17 03:08:42.306806 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:08:42.306817 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:08:42.306844 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:08:42.306856 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:08:42.306866 | orchestrator | 2026-04-17 03:08:42.306876 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-04-17 03:08:42.306887 | orchestrator | Friday 17 April 2026 03:08:34 +0000 (0:00:01.455) 0:06:59.149 ********** 2026-04-17 03:08:42.306898 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:08:42.306909 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:08:42.306919 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:08:42.306928 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:08:42.306938 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:08:42.306947 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:08:42.306957 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:08:42.306967 | orchestrator | 2026-04-17 03:08:42.306976 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-04-17 03:08:42.306985 | orchestrator | Friday 17 April 2026 03:08:34 +0000 (0:00:00.482) 0:06:59.631 ********** 2026-04-17 03:08:42.306994 | orchestrator | ok: [testbed-manager] 2026-04-17 03:08:42.307003 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:08:42.307013 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:08:42.307023 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:08:42.307032 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:08:42.307042 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:08:42.307083 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:09:13.839325 | orchestrator | 2026-04-17 03:09:13.839446 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-04-17 03:09:13.839463 | orchestrator | Friday 17 April 2026 03:08:42 +0000 (0:00:07.431) 0:07:07.063 ********** 2026-04-17 03:09:13.839475 | orchestrator | ok: [testbed-manager] 2026-04-17 03:09:13.839486 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:09:13.839497 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:09:13.839508 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:09:13.839517 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:09:13.839527 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:09:13.839537 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:09:13.839546 | orchestrator | 2026-04-17 03:09:13.839556 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-04-17 03:09:13.839566 | orchestrator | Friday 17 April 2026 03:08:43 +0000 (0:00:01.528) 0:07:08.592 ********** 2026-04-17 03:09:13.839576 | orchestrator | ok: [testbed-manager] 2026-04-17 03:09:13.839586 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:09:13.839596 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:09:13.839605 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:09:13.839615 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:09:13.839624 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:09:13.839634 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:09:13.839644 | orchestrator | 2026-04-17 03:09:13.839654 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-04-17 03:09:13.839663 | orchestrator | Friday 17 April 2026 03:08:45 +0000 (0:00:01.665) 0:07:10.257 ********** 2026-04-17 03:09:13.839673 | orchestrator | ok: [testbed-manager] 2026-04-17 03:09:13.839682 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:09:13.839692 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:09:13.839701 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:09:13.839711 | 
orchestrator | changed: [testbed-node-0] 2026-04-17 03:09:13.839721 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:09:13.839730 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:09:13.839740 | orchestrator | 2026-04-17 03:09:13.839749 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-17 03:09:13.839759 | orchestrator | Friday 17 April 2026 03:08:47 +0000 (0:00:01.632) 0:07:11.889 ********** 2026-04-17 03:09:13.839771 | orchestrator | ok: [testbed-manager] 2026-04-17 03:09:13.839783 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:09:13.839794 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:09:13.839830 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:09:13.839841 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:09:13.839853 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:09:13.839865 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:09:13.839876 | orchestrator | 2026-04-17 03:09:13.839887 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-17 03:09:13.839898 | orchestrator | Friday 17 April 2026 03:08:48 +0000 (0:00:00.894) 0:07:12.784 ********** 2026-04-17 03:09:13.839910 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:09:13.839923 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:09:13.839941 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:09:13.839958 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:09:13.839974 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:09:13.839990 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:09:13.840006 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:09:13.840021 | orchestrator | 2026-04-17 03:09:13.840038 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-04-17 03:09:13.840056 | orchestrator | Friday 17 April 2026 03:08:48 +0000 (0:00:00.946) 0:07:13.730 ********** 
2026-04-17 03:09:13.840073 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:09:13.840085 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:09:13.840097 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:09:13.840109 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:09:13.840120 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:09:13.840131 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:09:13.840142 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:09:13.840154 | orchestrator |
2026-04-17 03:09:13.840163 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-17 03:09:13.840198 | orchestrator | Friday 17 April 2026 03:08:49 +0000 (0:00:00.491) 0:07:14.222 **********
2026-04-17 03:09:13.840210 | orchestrator | ok: [testbed-manager]
2026-04-17 03:09:13.840238 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:13.840248 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:13.840257 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:13.840267 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:13.840276 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:13.840286 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:13.840295 | orchestrator |
2026-04-17 03:09:13.840305 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-17 03:09:13.840315 | orchestrator | Friday 17 April 2026 03:08:49 +0000 (0:00:00.526) 0:07:14.749 **********
2026-04-17 03:09:13.840324 | orchestrator | ok: [testbed-manager]
2026-04-17 03:09:13.840334 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:13.840343 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:13.840353 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:13.840363 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:13.840372 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:13.840382 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:13.840391 | orchestrator |
2026-04-17 03:09:13.840400 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-17 03:09:13.840410 | orchestrator | Friday 17 April 2026 03:08:50 +0000 (0:00:00.744) 0:07:15.493 **********
2026-04-17 03:09:13.840420 | orchestrator | ok: [testbed-manager]
2026-04-17 03:09:13.840430 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:13.840439 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:13.840448 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:13.840458 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:13.840474 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:13.840490 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:13.840514 | orchestrator |
2026-04-17 03:09:13.840533 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-17 03:09:13.840549 | orchestrator | Friday 17 April 2026 03:08:51 +0000 (0:00:00.511) 0:07:16.005 **********
2026-04-17 03:09:13.840564 | orchestrator | ok: [testbed-manager]
2026-04-17 03:09:13.840579 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:13.840607 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:13.840624 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:13.840640 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:13.840655 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:13.840670 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:13.840686 | orchestrator |
2026-04-17 03:09:13.840728 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-17 03:09:13.840746 | orchestrator | Friday 17 April 2026 03:08:56 +0000 (0:00:05.537) 0:07:21.542 **********
2026-04-17 03:09:13.840759 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:09:13.840770 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:09:13.840779 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:09:13.840789 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:09:13.840804 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:09:13.840821 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:09:13.840833 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:09:13.840843 | orchestrator |
2026-04-17 03:09:13.840852 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-17 03:09:13.840862 | orchestrator | Friday 17 April 2026 03:08:57 +0000 (0:00:00.494) 0:07:22.037 **********
2026-04-17 03:09:13.840874 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:09:13.840887 | orchestrator |
2026-04-17 03:09:13.840897 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-17 03:09:13.840907 | orchestrator | Friday 17 April 2026 03:08:58 +0000 (0:00:00.964) 0:07:23.002 **********
2026-04-17 03:09:13.840916 | orchestrator | ok: [testbed-manager]
2026-04-17 03:09:13.840928 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:13.840949 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:13.840972 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:13.840988 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:13.841003 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:13.841018 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:13.841035 | orchestrator |
2026-04-17 03:09:13.841053 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-17 03:09:13.841064 | orchestrator | Friday 17 April 2026 03:09:00 +0000 (0:00:01.965) 0:07:24.967 **********
2026-04-17 03:09:13.841074 | orchestrator | ok: [testbed-manager]
2026-04-17 03:09:13.841083 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:13.841093 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:13.841102 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:13.841112 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:13.841121 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:13.841130 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:13.841140 | orchestrator |
2026-04-17 03:09:13.841149 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-17 03:09:13.841159 | orchestrator | Friday 17 April 2026 03:09:01 +0000 (0:00:01.102) 0:07:26.070 **********
2026-04-17 03:09:13.841168 | orchestrator | ok: [testbed-manager]
2026-04-17 03:09:13.841219 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:13.841229 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:13.841239 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:13.841248 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:13.841258 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:13.841267 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:13.841277 | orchestrator |
2026-04-17 03:09:13.841287 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-17 03:09:13.841297 | orchestrator | Friday 17 April 2026 03:09:02 +0000 (0:00:00.794) 0:07:26.864 **********
2026-04-17 03:09:13.841315 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 03:09:13.841327 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 03:09:13.841346 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 03:09:13.841356 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 03:09:13.841366 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 03:09:13.841376 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 03:09:13.841385 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 03:09:13.841395 | orchestrator |
2026-04-17 03:09:13.841404 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-17 03:09:13.841414 | orchestrator | Friday 17 April 2026 03:09:03 +0000 (0:00:01.776) 0:07:28.641 **********
2026-04-17 03:09:13.841424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:09:13.841434 | orchestrator |
2026-04-17 03:09:13.841443 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-17 03:09:13.841454 | orchestrator | Friday 17 April 2026 03:09:04 +0000 (0:00:00.780) 0:07:29.422 **********
2026-04-17 03:09:13.841463 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:09:13.841473 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:09:13.841483 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:09:13.841492 | orchestrator | changed: [testbed-manager]
2026-04-17 03:09:13.841502 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:09:13.841511 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:09:13.841521 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:09:13.841530 | orchestrator |
2026-04-17 03:09:13.841550 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-17 03:09:43.779944 | orchestrator | Friday 17 April 2026 03:09:13 +0000 (0:00:09.176) 0:07:38.599 **********
2026-04-17 03:09:43.780045 | orchestrator | ok: [testbed-manager]
2026-04-17 03:09:43.780057 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:43.780063 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:43.780069 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:43.780075 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:43.780081 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:43.780086 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:43.780092 | orchestrator |
2026-04-17 03:09:43.780099 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-17 03:09:43.780105 | orchestrator | Friday 17 April 2026 03:09:15 +0000 (0:00:01.912) 0:07:40.511 **********
2026-04-17 03:09:43.780111 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:43.780117 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:43.780123 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:43.780129 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:43.780134 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:43.780140 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:43.780146 | orchestrator |
2026-04-17 03:09:43.780152 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-17 03:09:43.780158 | orchestrator | Friday 17 April 2026 03:09:17 +0000 (0:00:01.270) 0:07:41.782 **********
2026-04-17 03:09:43.780163 | orchestrator | changed: [testbed-manager]
2026-04-17 03:09:43.780171 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:09:43.780201 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:09:43.780207 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:09:43.780213 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:09:43.780236 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:09:43.780242 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:09:43.780248 | orchestrator |
2026-04-17 03:09:43.780254 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-17 03:09:43.780260 | orchestrator |
2026-04-17 03:09:43.780265 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-17 03:09:43.780271 | orchestrator | Friday 17 April 2026 03:09:18 +0000 (0:00:01.255) 0:07:43.037 **********
2026-04-17 03:09:43.780277 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:09:43.780283 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:09:43.780289 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:09:43.780294 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:09:43.780300 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:09:43.780306 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:09:43.780312 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:09:43.780318 | orchestrator |
2026-04-17 03:09:43.780324 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-17 03:09:43.780329 | orchestrator |
2026-04-17 03:09:43.780335 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-17 03:09:43.780341 | orchestrator | Friday 17 April 2026 03:09:18 +0000 (0:00:00.661) 0:07:43.699 **********
2026-04-17 03:09:43.780347 | orchestrator | changed: [testbed-manager]
2026-04-17 03:09:43.780361 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:09:43.780374 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:09:43.780380 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:09:43.780386 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:09:43.780391 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:09:43.780397 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:09:43.780403 | orchestrator |
2026-04-17 03:09:43.780409 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-17 03:09:43.780427 | orchestrator | Friday 17 April 2026 03:09:20 +0000 (0:00:01.301) 0:07:45.000 **********
2026-04-17 03:09:43.780433 | orchestrator | ok: [testbed-manager]
2026-04-17 03:09:43.780439 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:43.780445 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:43.780450 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:43.780456 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:43.780462 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:43.780468 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:43.780473 | orchestrator |
2026-04-17 03:09:43.780479 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-17 03:09:43.780485 | orchestrator | Friday 17 April 2026 03:09:21 +0000 (0:00:01.511) 0:07:46.511 **********
2026-04-17 03:09:43.780493 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:09:43.780502 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:09:43.780511 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:09:43.780521 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:09:43.780531 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:09:43.780540 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:09:43.780548 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:09:43.780554 | orchestrator |
2026-04-17 03:09:43.780559 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-17 03:09:43.780565 | orchestrator | Friday 17 April 2026 03:09:22 +0000 (0:00:00.471) 0:07:46.983 **********
2026-04-17 03:09:43.780572 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:09:43.780580 | orchestrator |
2026-04-17 03:09:43.780586 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-17 03:09:43.780592 | orchestrator | Friday 17 April 2026 03:09:23 +0000 (0:00:00.932) 0:07:47.916 **********
2026-04-17 03:09:43.780599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:09:43.780613 | orchestrator |
2026-04-17 03:09:43.780619 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-17 03:09:43.780625 | orchestrator | Friday 17 April 2026 03:09:23 +0000 (0:00:00.764) 0:07:48.680 **********
2026-04-17 03:09:43.780631 | orchestrator | changed: [testbed-manager]
2026-04-17 03:09:43.780637 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:09:43.780642 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:09:43.780648 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:09:43.780654 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:09:43.780660 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:09:43.780665 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:09:43.780671 | orchestrator |
2026-04-17 03:09:43.780690 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-17 03:09:43.780698 | orchestrator | Friday 17 April 2026 03:09:32 +0000 (0:00:08.759) 0:07:57.440 **********
2026-04-17 03:09:43.780708 | orchestrator | changed: [testbed-manager]
2026-04-17 03:09:43.780717 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:09:43.780726 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:09:43.780736 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:09:43.780746 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:09:43.780755 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:09:43.780765 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:09:43.780775 | orchestrator |
2026-04-17 03:09:43.780785 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-17 03:09:43.780791 | orchestrator | Friday 17 April 2026 03:09:33 +0000 (0:00:00.826) 0:07:58.266 **********
2026-04-17 03:09:43.780797 | orchestrator | changed: [testbed-manager]
2026-04-17 03:09:43.780802 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:09:43.780808 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:09:43.780813 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:09:43.780819 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:09:43.780825 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:09:43.780830 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:09:43.780836 | orchestrator |
2026-04-17 03:09:43.780842 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-17 03:09:43.780847 | orchestrator | Friday 17 April 2026 03:09:34 +0000 (0:00:01.316) 0:07:59.583 **********
2026-04-17 03:09:43.780853 | orchestrator | changed: [testbed-manager]
2026-04-17 03:09:43.780859 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:09:43.780865 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:09:43.780870 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:09:43.780876 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:09:43.780881 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:09:43.780887 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:09:43.780892 | orchestrator |
2026-04-17 03:09:43.780898 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-17 03:09:43.780904 | orchestrator | Friday 17 April 2026 03:09:36 +0000 (0:00:01.888) 0:08:01.471 **********
2026-04-17 03:09:43.780910 | orchestrator | changed: [testbed-manager]
2026-04-17 03:09:43.780915 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:09:43.780921 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:09:43.780927 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:09:43.780932 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:09:43.780938 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:09:43.780944 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:09:43.780949 | orchestrator |
2026-04-17 03:09:43.780955 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-17 03:09:43.780961 | orchestrator | Friday 17 April 2026 03:09:37 +0000 (0:00:01.268) 0:08:02.740 **********
2026-04-17 03:09:43.780967 | orchestrator | changed: [testbed-manager]
2026-04-17 03:09:43.780972 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:09:43.780984 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:09:43.780989 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:09:43.780995 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:09:43.781001 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:09:43.781006 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:09:43.781012 | orchestrator |
2026-04-17 03:09:43.781017 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-17 03:09:43.781023 | orchestrator |
2026-04-17 03:09:43.781033 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-17 03:09:43.781039 | orchestrator | Friday 17 April 2026 03:09:39 +0000 (0:00:01.051) 0:08:03.792 **********
2026-04-17 03:09:43.781045 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:09:43.781051 | orchestrator |
2026-04-17 03:09:43.781057 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-17 03:09:43.781063 | orchestrator | Friday 17 April 2026 03:09:39 +0000 (0:00:00.762) 0:08:04.555 **********
2026-04-17 03:09:43.781068 | orchestrator | ok: [testbed-manager]
2026-04-17 03:09:43.781074 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:43.781080 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:43.781086 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:43.781091 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:43.781097 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:43.781102 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:43.781108 | orchestrator |
2026-04-17 03:09:43.781114 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-17 03:09:43.781120 | orchestrator | Friday 17 April 2026 03:09:40 +0000 (0:00:01.038) 0:08:05.593 **********
2026-04-17 03:09:43.781125 | orchestrator | changed: [testbed-manager]
2026-04-17 03:09:43.781131 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:09:43.781137 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:09:43.781143 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:09:43.781148 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:09:43.781154 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:09:43.781160 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:09:43.781165 | orchestrator |
2026-04-17 03:09:43.781171 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-17 03:09:43.781195 | orchestrator | Friday 17 April 2026 03:09:41 +0000 (0:00:01.141) 0:08:06.735 **********
2026-04-17 03:09:43.781201 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:09:43.781207 | orchestrator |
2026-04-17 03:09:43.781213 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-17 03:09:43.781219 | orchestrator | Friday 17 April 2026 03:09:42 +0000 (0:00:00.943) 0:08:07.679 **********
2026-04-17 03:09:43.781224 | orchestrator | ok: [testbed-manager]
2026-04-17 03:09:43.781230 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:09:43.781236 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:09:43.781241 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:09:43.781248 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:09:43.781257 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:09:43.781265 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:09:43.781281 | orchestrator |
2026-04-17 03:09:43.781297 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-17 03:09:45.258410 | orchestrator | Friday 17 April 2026 03:09:43 +0000 (0:00:00.861) 0:08:08.540 **********
2026-04-17 03:09:45.258509 | orchestrator | changed: [testbed-manager]
2026-04-17 03:09:45.258521 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:09:45.258529 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:09:45.258537 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:09:45.258544 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:09:45.258551 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:09:45.258558 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:09:45.258586 | orchestrator |
2026-04-17 03:09:45.258595 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:09:45.258603 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-17 03:09:45.258612 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-17 03:09:45.258619 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-17 03:09:45.258627 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-17 03:09:45.258634 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-04-17 03:09:45.258641 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-17 03:09:45.258648 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-17 03:09:45.258655 | orchestrator |
2026-04-17 03:09:45.258662 | orchestrator |
2026-04-17 03:09:45.258670 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:09:45.258677 | orchestrator | Friday 17 April 2026 03:09:44 +0000 (0:00:01.067) 0:08:09.607 **********
2026-04-17 03:09:45.258685 | orchestrator | ===============================================================================
2026-04-17 03:09:45.258692 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.40s
2026-04-17 03:09:45.258699 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.48s
2026-04-17 03:09:45.258706 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.80s
2026-04-17 03:09:45.258713 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------ 16.76s
2026-04-17 03:09:45.258734 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.76s
2026-04-17 03:09:45.258747 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.68s
2026-04-17 03:09:45.258759 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.91s
2026-04-17 03:09:45.258779 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.66s
2026-04-17 03:09:45.258793 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.53s
2026-04-17 03:09:45.258805 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.18s
2026-04-17 03:09:45.258816 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.77s
2026-04-17 03:09:45.258827 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.76s
2026-04-17 03:09:45.258838 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.43s
2026-04-17 03:09:45.258849 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.09s
2026-04-17 03:09:45.258860 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.07s
2026-04-17 03:09:45.258872 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.64s
2026-04-17 03:09:45.258884 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.43s
2026-04-17 03:09:45.258897 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.48s
2026-04-17 03:09:45.258907 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.02s
2026-04-17 03:09:45.258920 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.93s
2026-04-17 03:09:45.547584 | orchestrator | + osism apply fail2ban
2026-04-17 03:09:58.146991 | orchestrator | 2026-04-17 03:09:58 | INFO  | Task 81f3bf03-27cb-4488-a1a9-4b5dda684137 (fail2ban) was prepared for execution.
2026-04-17 03:09:58.147136 | orchestrator | 2026-04-17 03:09:58 | INFO  | It takes a moment until task 81f3bf03-27cb-4488-a1a9-4b5dda684137 (fail2ban) has been started and output is visible here.
2026-04-17 03:10:19.197892 | orchestrator |
2026-04-17 03:10:19.197980 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-17 03:10:19.197990 | orchestrator |
2026-04-17 03:10:19.197997 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-17 03:10:19.198003 | orchestrator | Friday 17 April 2026 03:10:02 +0000 (0:00:00.245) 0:00:00.245 **********
2026-04-17 03:10:19.198009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:10:19.198048 | orchestrator |
2026-04-17 03:10:19.198054 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-17 03:10:19.198060 | orchestrator | Friday 17 April 2026 03:10:03 +0000 (0:00:01.086) 0:00:01.332 **********
2026-04-17 03:10:19.198066 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:10:19.198072 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:10:19.198077 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:10:19.198083 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:10:19.198088 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:10:19.198093 | orchestrator | changed: [testbed-manager]
2026-04-17 03:10:19.198098 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:10:19.198103 | orchestrator |
2026-04-17 03:10:19.198109 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-17 03:10:19.198114 | orchestrator | Friday 17 April 2026 03:10:14 +0000 (0:00:10.797) 0:00:12.129 **********
2026-04-17 03:10:19.198119 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:10:19.198124 | orchestrator | changed: [testbed-manager]
2026-04-17 03:10:19.198138 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:10:19.198143 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:10:19.198155 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:10:19.198160 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:10:19.198165 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:10:19.198170 | orchestrator |
2026-04-17 03:10:19.198192 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-17 03:10:19.198198 | orchestrator | Friday 17 April 2026 03:10:15 +0000 (0:00:01.416) 0:00:13.546 **********
2026-04-17 03:10:19.198203 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:10:19.198209 | orchestrator | ok: [testbed-manager]
2026-04-17 03:10:19.198214 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:10:19.198219 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:10:19.198224 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:10:19.198229 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:10:19.198234 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:10:19.198239 | orchestrator |
2026-04-17 03:10:19.198244 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-17 03:10:19.198249 | orchestrator | Friday 17 April 2026 03:10:17 +0000 (0:00:01.397) 0:00:14.944 **********
2026-04-17 03:10:19.198255 | orchestrator | changed: [testbed-manager]
2026-04-17 03:10:19.198260 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:10:19.198265 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:10:19.198269 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:10:19.198275 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:10:19.198279 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:10:19.198284 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:10:19.198289 | orchestrator |
2026-04-17 03:10:19.198294 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:10:19.198300 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:10:19.198326 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:10:19.198331 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:10:19.198336 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:10:19.198342 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:10:19.198347 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:10:19.198352 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:10:19.198357 | orchestrator |
2026-04-17 03:10:19.198361 | orchestrator |
2026-04-17 03:10:19.198366 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:10:19.198371 | orchestrator | Friday 17 April 2026 03:10:18 +0000 (0:00:01.595) 0:00:16.539 **********
2026-04-17 03:10:19.198376 | orchestrator | ===============================================================================
2026-04-17 03:10:19.198381 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.80s
2026-04-17 03:10:19.198386 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.60s
2026-04-17 03:10:19.198391 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.42s
2026-04-17 03:10:19.198396 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.40s
2026-04-17 03:10:19.198401 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.09s
2026-04-17 03:10:19.468199 | orchestrator | + osism apply network
2026-04-17 03:10:31.581045 | orchestrator | 2026-04-17 03:10:31 | INFO  | Task 5033d5a7-c530-426b-9ff0-e8c5b2b6db7a (network) was prepared for execution.
2026-04-17 03:10:31.581159 | orchestrator | 2026-04-17 03:10:31 | INFO  | It takes a moment until task 5033d5a7-c530-426b-9ff0-e8c5b2b6db7a (network) has been started and output is visible here.
2026-04-17 03:10:59.174781 | orchestrator |
2026-04-17 03:10:59.174914 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-17 03:10:59.174972 | orchestrator |
2026-04-17 03:10:59.174990 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-17 03:10:59.175005 | orchestrator | Friday 17 April 2026 03:10:35 +0000 (0:00:00.244) 0:00:00.245 **********
2026-04-17 03:10:59.175020 | orchestrator | ok: [testbed-manager]
2026-04-17 03:10:59.175037 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:10:59.175053 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:10:59.175068 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:10:59.175083 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:10:59.175099 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:10:59.175111 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:10:59.175125 | orchestrator |
2026-04-17 03:10:59.175145 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-17 03:10:59.175162 | orchestrator | Friday 17 April 2026 03:10:36 +0000 (0:00:00.692) 0:00:00.937 **********
2026-04-17 03:10:59.175204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:10:59.175221 | orchestrator |
2026-04-17 03:10:59.175235 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-17 03:10:59.175250 | orchestrator | Friday 17 April 2026 03:10:37 +0000 (0:00:01.161) 0:00:02.099 **********
2026-04-17 03:10:59.175296 | orchestrator | ok: [testbed-manager]
2026-04-17 03:10:59.175312 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:10:59.175326 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:10:59.175341 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:10:59.175357 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:10:59.175378 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:10:59.175394 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:10:59.175411 | orchestrator |
2026-04-17 03:10:59.175427 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-17 03:10:59.175442 | orchestrator | Friday 17 April 2026 03:10:39 +0000 (0:00:01.893) 0:00:03.992 **********
2026-04-17 03:10:59.175458 | orchestrator | ok: [testbed-manager]
2026-04-17 03:10:59.175473 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:10:59.175488 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:10:59.175504 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:10:59.175519 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:10:59.175534 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:10:59.175550 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:10:59.175565 | orchestrator |
2026-04-17 03:10:59.175581 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-17 03:10:59.175597 | orchestrator | Friday 17 April 2026 03:10:41 +0000 (0:00:01.711) 0:00:05.704 **********
2026-04-17 03:10:59.175613 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-17 03:10:59.175630 | orchestrator | ok:
[testbed-node-0] => (item=/etc/netplan) 2026-04-17 03:10:59.175645 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-17 03:10:59.175659 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-17 03:10:59.175674 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-17 03:10:59.175690 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-17 03:10:59.175706 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-17 03:10:59.175721 | orchestrator | 2026-04-17 03:10:59.175757 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-17 03:10:59.175875 | orchestrator | Friday 17 April 2026 03:10:42 +0000 (0:00:00.947) 0:00:06.652 ********** 2026-04-17 03:10:59.175893 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 03:10:59.175910 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 03:10:59.175926 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 03:10:59.175941 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 03:10:59.175957 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 03:10:59.175971 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 03:10:59.175984 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 03:10:59.175999 | orchestrator | 2026-04-17 03:10:59.176014 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-17 03:10:59.176030 | orchestrator | Friday 17 April 2026 03:10:45 +0000 (0:00:03.118) 0:00:09.770 ********** 2026-04-17 03:10:59.176045 | orchestrator | changed: [testbed-manager] 2026-04-17 03:10:59.176061 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:10:59.176075 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:10:59.176090 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:10:59.176105 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:10:59.176121 | orchestrator | 
changed: [testbed-node-4] 2026-04-17 03:10:59.176136 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:10:59.176151 | orchestrator | 2026-04-17 03:10:59.176167 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-04-17 03:10:59.176209 | orchestrator | Friday 17 April 2026 03:10:46 +0000 (0:00:01.546) 0:00:11.316 ********** 2026-04-17 03:10:59.176224 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 03:10:59.176239 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 03:10:59.176255 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 03:10:59.176270 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 03:10:59.176285 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 03:10:59.176317 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 03:10:59.176333 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 03:10:59.176348 | orchestrator | 2026-04-17 03:10:59.176363 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-17 03:10:59.176378 | orchestrator | Friday 17 April 2026 03:10:48 +0000 (0:00:01.774) 0:00:13.091 ********** 2026-04-17 03:10:59.176393 | orchestrator | ok: [testbed-manager] 2026-04-17 03:10:59.176408 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:10:59.176422 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:10:59.176437 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:10:59.176451 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:10:59.176464 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:10:59.176478 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:10:59.176493 | orchestrator | 2026-04-17 03:10:59.176508 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-17 03:10:59.176550 | orchestrator | Friday 17 April 2026 03:10:49 +0000 (0:00:01.091) 0:00:14.183 ********** 2026-04-17 03:10:59.176566 | orchestrator 
| skipping: [testbed-manager] 2026-04-17 03:10:59.176581 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:10:59.176595 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:10:59.176608 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:10:59.176622 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:10:59.176635 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:10:59.176649 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:10:59.176663 | orchestrator | 2026-04-17 03:10:59.176678 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-17 03:10:59.176693 | orchestrator | Friday 17 April 2026 03:10:50 +0000 (0:00:00.662) 0:00:14.845 ********** 2026-04-17 03:10:59.176708 | orchestrator | ok: [testbed-manager] 2026-04-17 03:10:59.176724 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:10:59.176738 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:10:59.176753 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:10:59.176768 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:10:59.176783 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:10:59.176798 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:10:59.176812 | orchestrator | 2026-04-17 03:10:59.176827 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-17 03:10:59.176841 | orchestrator | Friday 17 April 2026 03:10:52 +0000 (0:00:02.159) 0:00:17.005 ********** 2026-04-17 03:10:59.176855 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:10:59.176870 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:10:59.176884 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:10:59.176899 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:10:59.176914 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:10:59.176928 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:10:59.176943 | orchestrator | changed: [testbed-manager] => (item={'dest': 
'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-04-17 03:10:59.176960 | orchestrator | 2026-04-17 03:10:59.176975 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-17 03:10:59.176990 | orchestrator | Friday 17 April 2026 03:10:53 +0000 (0:00:00.906) 0:00:17.911 ********** 2026-04-17 03:10:59.177004 | orchestrator | ok: [testbed-manager] 2026-04-17 03:10:59.177020 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:10:59.177034 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:10:59.177049 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:10:59.177063 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:10:59.177079 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:10:59.177093 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:10:59.177108 | orchestrator | 2026-04-17 03:10:59.177122 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-17 03:10:59.177137 | orchestrator | Friday 17 April 2026 03:10:54 +0000 (0:00:01.612) 0:00:19.523 ********** 2026-04-17 03:10:59.177153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:10:59.177343 | orchestrator | 2026-04-17 03:10:59.177366 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-17 03:10:59.177382 | orchestrator | Friday 17 April 2026 03:10:56 +0000 (0:00:01.201) 0:00:20.725 ********** 2026-04-17 03:10:59.177397 | orchestrator | ok: [testbed-manager] 2026-04-17 03:10:59.177412 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:10:59.177426 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:10:59.177443 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:10:59.177469 | orchestrator | 
ok: [testbed-node-3] 2026-04-17 03:10:59.177485 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:10:59.177501 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:10:59.177515 | orchestrator | 2026-04-17 03:10:59.177530 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-17 03:10:59.177545 | orchestrator | Friday 17 April 2026 03:10:57 +0000 (0:00:01.101) 0:00:21.827 ********** 2026-04-17 03:10:59.177560 | orchestrator | ok: [testbed-manager] 2026-04-17 03:10:59.177574 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:10:59.177588 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:10:59.177605 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:10:59.177620 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:10:59.177635 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:10:59.177650 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:10:59.177664 | orchestrator | 2026-04-17 03:10:59.177679 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-17 03:10:59.177693 | orchestrator | Friday 17 April 2026 03:10:57 +0000 (0:00:00.658) 0:00:22.486 ********** 2026-04-17 03:10:59.177709 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 03:10:59.177723 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 03:10:59.177737 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 03:10:59.177753 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 03:10:59.177766 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 03:10:59.177781 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 03:10:59.177795 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 03:10:59.177809 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 03:10:59.177823 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 03:10:59.177836 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 03:10:59.177850 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 03:10:59.177863 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 03:10:59.177876 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 03:10:59.177890 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 03:10:59.177904 | orchestrator | 2026-04-17 03:10:59.177935 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-17 03:11:14.263598 | orchestrator | Friday 17 April 2026 03:10:59 +0000 (0:00:01.230) 0:00:23.716 ********** 2026-04-17 03:11:14.263728 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:11:14.263746 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:11:14.263755 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:11:14.263764 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:11:14.263773 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:11:14.263782 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:11:14.263790 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:11:14.263799 | orchestrator | 2026-04-17 03:11:14.263809 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-17 03:11:14.263840 | orchestrator | Friday 17 April 2026 03:10:59 +0000 (0:00:00.629) 0:00:24.346 ********** 2026-04-17 03:11:14.263851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, 
testbed-manager, testbed-node-5, testbed-node-2, testbed-node-4, testbed-node-3 2026-04-17 03:11:14.263863 | orchestrator | 2026-04-17 03:11:14.263871 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-17 03:11:14.263880 | orchestrator | Friday 17 April 2026 03:11:04 +0000 (0:00:04.452) 0:00:28.798 ********** 2026-04-17 03:11:14.263890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.263900 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.263911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.263920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.263929 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.263952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', 
'192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.263966 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.263977 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:14.263992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:14.264001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:14.264010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:14.264035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-04-17 
03:11:14.264051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:14.264060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:14.264069 | orchestrator | 2026-04-17 03:11:14.264077 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-17 03:11:14.264087 | orchestrator | Friday 17 April 2026 03:11:09 +0000 (0:00:04.915) 0:00:33.713 ********** 2026-04-17 03:11:14.264096 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.264105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.264114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.264124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.264135 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:14.264150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.264161 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.264172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-04-17 03:11:14.264283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:14.264294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:14.264305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:14.264321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:14.264340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:19.476962 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-04-17 03:11:19.477096 | orchestrator | 2026-04-17 03:11:19.477117 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-17 03:11:19.477130 | orchestrator | Friday 17 April 2026 03:11:14 +0000 (0:00:05.089) 0:00:38.803 ********** 2026-04-17 03:11:19.477143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:11:19.477155 | orchestrator | 2026-04-17 03:11:19.477166 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-17 03:11:19.477208 | orchestrator | Friday 17 April 2026 03:11:15 +0000 (0:00:01.103) 0:00:39.907 ********** 2026-04-17 
03:11:19.477230 | orchestrator | ok: [testbed-manager] 2026-04-17 03:11:19.477243 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:11:19.477254 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:11:19.477265 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:11:19.477282 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:11:19.477300 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:11:19.477317 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:11:19.477345 | orchestrator | 2026-04-17 03:11:19.477366 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-17 03:11:19.477384 | orchestrator | Friday 17 April 2026 03:11:16 +0000 (0:00:00.974) 0:00:40.881 ********** 2026-04-17 03:11:19.477401 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 03:11:19.477420 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 03:11:19.477437 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 03:11:19.477456 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 03:11:19.477475 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:11:19.477497 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 03:11:19.477517 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 03:11:19.477536 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 03:11:19.477554 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 03:11:19.477567 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:11:19.477580 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 03:11:19.477610 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 03:11:19.477623 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 03:11:19.477635 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 03:11:19.477648 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:11:19.477682 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 03:11:19.477695 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 03:11:19.477707 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 03:11:19.477720 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 03:11:19.477732 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:11:19.477744 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 03:11:19.477757 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 03:11:19.477768 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 03:11:19.477781 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 03:11:19.477793 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:11:19.477806 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 03:11:19.477823 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 03:11:19.477841 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 03:11:19.477868 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  
2026-04-17 03:11:19.477887 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:11:19.477905 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 03:11:19.477922 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 03:11:19.477937 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 03:11:19.477955 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 03:11:19.477972 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:11:19.477989 | orchestrator | 2026-04-17 03:11:19.478007 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-17 03:11:19.478116 | orchestrator | Friday 17 April 2026 03:11:18 +0000 (0:00:01.747) 0:00:42.629 ********** 2026-04-17 03:11:19.478137 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:11:19.478157 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:11:19.478175 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:11:19.478227 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:11:19.478244 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:11:19.478262 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:11:19.478280 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:11:19.478299 | orchestrator | 2026-04-17 03:11:19.478319 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-17 03:11:19.478337 | orchestrator | Friday 17 April 2026 03:11:18 +0000 (0:00:00.578) 0:00:43.207 ********** 2026-04-17 03:11:19.478356 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:11:19.478373 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:11:19.478391 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:11:19.478414 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
03:11:19.478441 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:11:19.478459 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:11:19.478475 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:11:19.478492 | orchestrator | 2026-04-17 03:11:19.478509 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:11:19.478528 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 03:11:19.478547 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 03:11:19.478585 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 03:11:19.478604 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 03:11:19.478623 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 03:11:19.478643 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 03:11:19.478673 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 03:11:19.478692 | orchestrator | 2026-04-17 03:11:19.478709 | orchestrator | 2026-04-17 03:11:19.478727 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:11:19.478745 | orchestrator | Friday 17 April 2026 03:11:19 +0000 (0:00:00.580) 0:00:43.787 ********** 2026-04-17 03:11:19.478776 | orchestrator | =============================================================================== 2026-04-17 03:11:19.478794 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.09s 2026-04-17 03:11:19.478811 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 
4.92s 2026-04-17 03:11:19.478829 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.45s 2026-04-17 03:11:19.478848 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.12s 2026-04-17 03:11:19.478867 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.16s 2026-04-17 03:11:19.478885 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.89s 2026-04-17 03:11:19.478902 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.77s 2026-04-17 03:11:19.478919 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.75s 2026-04-17 03:11:19.478937 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.71s 2026-04-17 03:11:19.478955 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.61s 2026-04-17 03:11:19.478973 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.55s 2026-04-17 03:11:19.478990 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.23s 2026-04-17 03:11:19.479007 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.20s 2026-04-17 03:11:19.479025 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.16s 2026-04-17 03:11:19.479042 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.10s 2026-04-17 03:11:19.479061 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.10s 2026-04-17 03:11:19.479079 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.09s 2026-04-17 03:11:19.479096 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 
2026-04-17 03:11:19.479115 | orchestrator | osism.commons.network : Create required directories --------------------- 0.95s 2026-04-17 03:11:19.479134 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.91s 2026-04-17 03:11:19.661652 | orchestrator | + osism apply wireguard 2026-04-17 03:11:31.518430 | orchestrator | 2026-04-17 03:11:31 | INFO  | Task 3fa78673-9438-4018-a125-30d35b255776 (wireguard) was prepared for execution. 2026-04-17 03:11:31.518522 | orchestrator | 2026-04-17 03:11:31 | INFO  | It takes a moment until task 3fa78673-9438-4018-a125-30d35b255776 (wireguard) has been started and output is visible here. 2026-04-17 03:11:50.819106 | orchestrator | 2026-04-17 03:11:50.819259 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-17 03:11:50.819297 | orchestrator | 2026-04-17 03:11:50.819305 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-17 03:11:50.819312 | orchestrator | Friday 17 April 2026 03:11:35 +0000 (0:00:00.212) 0:00:00.212 ********** 2026-04-17 03:11:50.819320 | orchestrator | ok: [testbed-manager] 2026-04-17 03:11:50.819330 | orchestrator | 2026-04-17 03:11:50.819336 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-17 03:11:50.819343 | orchestrator | Friday 17 April 2026 03:11:37 +0000 (0:00:01.465) 0:00:01.677 ********** 2026-04-17 03:11:50.819351 | orchestrator | changed: [testbed-manager] 2026-04-17 03:11:50.819360 | orchestrator | 2026-04-17 03:11:50.819371 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-17 03:11:50.819378 | orchestrator | Friday 17 April 2026 03:11:43 +0000 (0:00:06.291) 0:00:07.969 ********** 2026-04-17 03:11:50.819385 | orchestrator | changed: [testbed-manager] 2026-04-17 03:11:50.819392 | orchestrator | 2026-04-17 03:11:50.819398 | orchestrator | TASK 
[osism.services.wireguard : Create preshared key] ************************* 2026-04-17 03:11:50.819406 | orchestrator | Friday 17 April 2026 03:11:43 +0000 (0:00:00.546) 0:00:08.516 ********** 2026-04-17 03:11:50.819412 | orchestrator | changed: [testbed-manager] 2026-04-17 03:11:50.819419 | orchestrator | 2026-04-17 03:11:50.819426 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-17 03:11:50.819433 | orchestrator | Friday 17 April 2026 03:11:44 +0000 (0:00:00.430) 0:00:08.946 ********** 2026-04-17 03:11:50.819439 | orchestrator | ok: [testbed-manager] 2026-04-17 03:11:50.819446 | orchestrator | 2026-04-17 03:11:50.819453 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-17 03:11:50.819460 | orchestrator | Friday 17 April 2026 03:11:45 +0000 (0:00:00.639) 0:00:09.585 ********** 2026-04-17 03:11:50.819466 | orchestrator | ok: [testbed-manager] 2026-04-17 03:11:50.819474 | orchestrator | 2026-04-17 03:11:50.819481 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-17 03:11:50.819488 | orchestrator | Friday 17 April 2026 03:11:45 +0000 (0:00:00.414) 0:00:09.999 ********** 2026-04-17 03:11:50.819495 | orchestrator | ok: [testbed-manager] 2026-04-17 03:11:50.819501 | orchestrator | 2026-04-17 03:11:50.819508 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-17 03:11:50.819514 | orchestrator | Friday 17 April 2026 03:11:45 +0000 (0:00:00.405) 0:00:10.405 ********** 2026-04-17 03:11:50.819520 | orchestrator | changed: [testbed-manager] 2026-04-17 03:11:50.819527 | orchestrator | 2026-04-17 03:11:50.819533 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-17 03:11:50.819539 | orchestrator | Friday 17 April 2026 03:11:47 +0000 (0:00:01.158) 0:00:11.563 ********** 2026-04-17 03:11:50.819546 | 
orchestrator | changed: [testbed-manager] => (item=None) 2026-04-17 03:11:50.819553 | orchestrator | changed: [testbed-manager] 2026-04-17 03:11:50.819560 | orchestrator | 2026-04-17 03:11:50.819568 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-17 03:11:50.819575 | orchestrator | Friday 17 April 2026 03:11:47 +0000 (0:00:00.939) 0:00:12.503 ********** 2026-04-17 03:11:50.819583 | orchestrator | changed: [testbed-manager] 2026-04-17 03:11:50.819590 | orchestrator | 2026-04-17 03:11:50.819598 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-17 03:11:50.819606 | orchestrator | Friday 17 April 2026 03:11:49 +0000 (0:00:01.654) 0:00:14.158 ********** 2026-04-17 03:11:50.819614 | orchestrator | changed: [testbed-manager] 2026-04-17 03:11:50.819622 | orchestrator | 2026-04-17 03:11:50.819630 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:11:50.819638 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:11:50.819647 | orchestrator | 2026-04-17 03:11:50.819655 | orchestrator | 2026-04-17 03:11:50.819662 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:11:50.819680 | orchestrator | Friday 17 April 2026 03:11:50 +0000 (0:00:00.859) 0:00:15.017 ********** 2026-04-17 03:11:50.819689 | orchestrator | =============================================================================== 2026-04-17 03:11:50.819698 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.29s 2026-04-17 03:11:50.819707 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.65s 2026-04-17 03:11:50.819716 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.47s 2026-04-17 03:11:50.819725 | 
orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.16s 2026-04-17 03:11:50.819733 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s 2026-04-17 03:11:50.819742 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.86s 2026-04-17 03:11:50.819751 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.64s 2026-04-17 03:11:50.819760 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-04-17 03:11:50.819767 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2026-04-17 03:11:50.819775 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s 2026-04-17 03:11:50.819782 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2026-04-17 03:11:51.153821 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-17 03:11:51.192529 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-17 03:11:51.192609 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-17 03:11:51.266896 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 186 0 --:--:-- --:--:-- --:--:-- 189 2026-04-17 03:11:51.281314 | orchestrator | + osism apply --environment custom workarounds 2026-04-17 03:11:53.151175 | orchestrator | 2026-04-17 03:11:53 | INFO  | Trying to run play workarounds in environment custom 2026-04-17 03:12:03.227673 | orchestrator | 2026-04-17 03:12:03 | INFO  | Task a64cf02a-50e1-4033-8400-6ba9af4005fc (workarounds) was prepared for execution. 2026-04-17 03:12:03.227791 | orchestrator | 2026-04-17 03:12:03 | INFO  | It takes a moment until task a64cf02a-50e1-4033-8400-6ba9af4005fc (workarounds) has been started and output is visible here. 
2026-04-17 03:12:26.884140 | orchestrator | 2026-04-17 03:12:26.884313 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 03:12:26.884332 | orchestrator | 2026-04-17 03:12:26.884344 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-17 03:12:26.884355 | orchestrator | Friday 17 April 2026 03:12:07 +0000 (0:00:00.099) 0:00:00.099 ********** 2026-04-17 03:12:26.884365 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-17 03:12:26.884376 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-17 03:12:26.884386 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-17 03:12:26.884396 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-17 03:12:26.884405 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-17 03:12:26.884415 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-17 03:12:26.884424 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-17 03:12:26.884434 | orchestrator | 2026-04-17 03:12:26.884444 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-17 03:12:26.884453 | orchestrator | 2026-04-17 03:12:26.884463 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-17 03:12:26.884472 | orchestrator | Friday 17 April 2026 03:12:07 +0000 (0:00:00.577) 0:00:00.677 ********** 2026-04-17 03:12:26.884482 | orchestrator | ok: [testbed-manager] 2026-04-17 03:12:26.884493 | orchestrator | 2026-04-17 03:12:26.884525 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-17 03:12:26.884535 | orchestrator | 2026-04-17 03:12:26.884545 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-17 03:12:26.884555 | orchestrator | Friday 17 April 2026 03:12:09 +0000 (0:00:01.990) 0:00:02.667 ********** 2026-04-17 03:12:26.884565 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:12:26.884575 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:12:26.884584 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:12:26.884594 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:12:26.884603 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:12:26.884612 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:12:26.884622 | orchestrator | 2026-04-17 03:12:26.884632 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-17 03:12:26.884641 | orchestrator | 2026-04-17 03:12:26.884651 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-17 03:12:26.884678 | orchestrator | Friday 17 April 2026 03:12:11 +0000 (0:00:01.662) 0:00:04.329 ********** 2026-04-17 03:12:26.884691 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 03:12:26.884703 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 03:12:26.884715 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 03:12:26.884726 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 03:12:26.884737 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 03:12:26.884748 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 03:12:26.884758 | orchestrator | 2026-04-17 03:12:26.884770 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-17 03:12:26.884781 | orchestrator | Friday 17 April 2026 03:12:12 +0000 (0:00:01.470) 0:00:05.799 ********** 2026-04-17 03:12:26.884793 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:12:26.884804 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:12:26.884814 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:12:26.884823 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:12:26.884832 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:12:26.884842 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:12:26.884851 | orchestrator | 2026-04-17 03:12:26.884861 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-17 03:12:26.884870 | orchestrator | Friday 17 April 2026 03:12:16 +0000 (0:00:03.535) 0:00:09.335 ********** 2026-04-17 03:12:26.884879 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:12:26.884889 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:12:26.884899 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:12:26.884908 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:12:26.884918 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:12:26.884927 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:12:26.884937 | orchestrator | 2026-04-17 03:12:26.884946 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-17 03:12:26.884956 | orchestrator | 2026-04-17 03:12:26.884965 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-17 03:12:26.884975 | orchestrator | Friday 17 April 2026 03:12:17 +0000 (0:00:00.669) 0:00:10.004 ********** 2026-04-17 03:12:26.884984 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:12:26.884994 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:12:26.885003 | orchestrator | changed: [testbed-node-2] 2026-04-17 
03:12:26.885013 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:12:26.885022 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:12:26.885032 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:12:26.885041 | orchestrator | changed: [testbed-manager] 2026-04-17 03:12:26.885057 | orchestrator | 2026-04-17 03:12:26.885067 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-17 03:12:26.885076 | orchestrator | Friday 17 April 2026 03:12:18 +0000 (0:00:01.519) 0:00:11.523 ********** 2026-04-17 03:12:26.885086 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:12:26.885095 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:12:26.885105 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:12:26.885114 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:12:26.885123 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:12:26.885133 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:12:26.885159 | orchestrator | changed: [testbed-manager] 2026-04-17 03:12:26.885170 | orchestrator | 2026-04-17 03:12:26.885179 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-17 03:12:26.885189 | orchestrator | Friday 17 April 2026 03:12:20 +0000 (0:00:01.539) 0:00:13.063 ********** 2026-04-17 03:12:26.885217 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:12:26.885228 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:12:26.885237 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:12:26.885247 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:12:26.885256 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:12:26.885266 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:12:26.885275 | orchestrator | ok: [testbed-manager] 2026-04-17 03:12:26.885284 | orchestrator | 2026-04-17 03:12:26.885294 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-17 03:12:26.885304 | orchestrator 
| Friday 17 April 2026 03:12:21 +0000 (0:00:01.612) 0:00:14.676 ********** 2026-04-17 03:12:26.885313 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:12:26.885323 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:12:26.885332 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:12:26.885342 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:12:26.885351 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:12:26.885361 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:12:26.885370 | orchestrator | changed: [testbed-manager] 2026-04-17 03:12:26.885379 | orchestrator | 2026-04-17 03:12:26.885389 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-17 03:12:26.885399 | orchestrator | Friday 17 April 2026 03:12:23 +0000 (0:00:01.815) 0:00:16.492 ********** 2026-04-17 03:12:26.885408 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:12:26.885417 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:12:26.885427 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:12:26.885436 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:12:26.885446 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:12:26.885455 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:12:26.885465 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:12:26.885474 | orchestrator | 2026-04-17 03:12:26.885484 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-17 03:12:26.885494 | orchestrator | 2026-04-17 03:12:26.885503 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-17 03:12:26.885513 | orchestrator | Friday 17 April 2026 03:12:24 +0000 (0:00:00.631) 0:00:17.123 ********** 2026-04-17 03:12:26.885522 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:12:26.885532 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:12:26.885541 | orchestrator | ok: [testbed-node-1] 
2026-04-17 03:12:26.885551 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:12:26.885560 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:12:26.885574 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:12:26.885584 | orchestrator | ok: [testbed-manager] 2026-04-17 03:12:26.885594 | orchestrator | 2026-04-17 03:12:26.885603 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:12:26.885614 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:12:26.885625 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 03:12:26.885641 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 03:12:26.885650 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 03:12:26.885660 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 03:12:26.885669 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 03:12:26.885679 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 03:12:26.885688 | orchestrator | 2026-04-17 03:12:26.885698 | orchestrator | 2026-04-17 03:12:26.885707 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:12:26.885717 | orchestrator | Friday 17 April 2026 03:12:26 +0000 (0:00:02.663) 0:00:19.787 ********** 2026-04-17 03:12:26.885726 | orchestrator | =============================================================================== 2026-04-17 03:12:26.885736 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.54s 2026-04-17 03:12:26.885745 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.66s 2026-04-17 03:12:26.885755 | orchestrator | Apply netplan configuration --------------------------------------------- 1.99s 2026-04-17 03:12:26.885764 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.82s 2026-04-17 03:12:26.885774 | orchestrator | Apply netplan configuration --------------------------------------------- 1.66s 2026-04-17 03:12:26.885784 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.61s 2026-04-17 03:12:26.885793 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.54s 2026-04-17 03:12:26.885802 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.52s 2026-04-17 03:12:26.885812 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.47s 2026-04-17 03:12:26.885821 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.67s 2026-04-17 03:12:26.885831 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s 2026-04-17 03:12:26.885846 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.58s 2026-04-17 03:12:27.518823 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-17 03:12:39.546391 | orchestrator | 2026-04-17 03:12:39 | INFO  | Task ed0316fe-fdff-4bf9-a969-775af2dfaa9f (reboot) was prepared for execution. 2026-04-17 03:12:39.547467 | orchestrator | 2026-04-17 03:12:39 | INFO  | It takes a moment until task ed0316fe-fdff-4bf9-a969-775af2dfaa9f (reboot) has been started and output is visible here. 
2026-04-17 03:12:48.536512 | orchestrator | 2026-04-17 03:12:48.536627 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 03:12:48.536640 | orchestrator | 2026-04-17 03:12:48.536648 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-17 03:12:48.536656 | orchestrator | Friday 17 April 2026 03:12:43 +0000 (0:00:00.180) 0:00:00.180 ********** 2026-04-17 03:12:48.536663 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:12:48.536671 | orchestrator | 2026-04-17 03:12:48.536678 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 03:12:48.536685 | orchestrator | Friday 17 April 2026 03:12:43 +0000 (0:00:00.092) 0:00:00.273 ********** 2026-04-17 03:12:48.536692 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:12:48.536698 | orchestrator | 2026-04-17 03:12:48.536705 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-17 03:12:48.536732 | orchestrator | Friday 17 April 2026 03:12:44 +0000 (0:00:00.804) 0:00:01.078 ********** 2026-04-17 03:12:48.536740 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:12:48.536751 | orchestrator | 2026-04-17 03:12:48.536763 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 03:12:48.536773 | orchestrator | 2026-04-17 03:12:48.536784 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-17 03:12:48.536796 | orchestrator | Friday 17 April 2026 03:12:44 +0000 (0:00:00.104) 0:00:01.183 ********** 2026-04-17 03:12:48.536806 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:12:48.536817 | orchestrator | 2026-04-17 03:12:48.536829 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 03:12:48.536841 | orchestrator | Friday 17 April 2026 
03:12:44 +0000 (0:00:00.089) 0:00:01.272 ********** 2026-04-17 03:12:48.536853 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:12:48.536864 | orchestrator | 2026-04-17 03:12:48.536874 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-17 03:12:48.536899 | orchestrator | Friday 17 April 2026 03:12:44 +0000 (0:00:00.598) 0:00:01.871 ********** 2026-04-17 03:12:48.536911 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:12:48.536922 | orchestrator | 2026-04-17 03:12:48.536933 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 03:12:48.536943 | orchestrator | 2026-04-17 03:12:48.536954 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-17 03:12:48.536965 | orchestrator | Friday 17 April 2026 03:12:45 +0000 (0:00:00.103) 0:00:01.974 ********** 2026-04-17 03:12:48.536977 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:12:48.536987 | orchestrator | 2026-04-17 03:12:48.536998 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 03:12:48.537009 | orchestrator | Friday 17 April 2026 03:12:45 +0000 (0:00:00.165) 0:00:02.140 ********** 2026-04-17 03:12:48.537019 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:12:48.537030 | orchestrator | 2026-04-17 03:12:48.537041 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-17 03:12:48.537053 | orchestrator | Friday 17 April 2026 03:12:45 +0000 (0:00:00.606) 0:00:02.747 ********** 2026-04-17 03:12:48.537064 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:12:48.537075 | orchestrator | 2026-04-17 03:12:48.537086 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 03:12:48.537097 | orchestrator | 2026-04-17 03:12:48.537108 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-04-17 03:12:48.537121 | orchestrator | Friday 17 April 2026 03:12:45 +0000 (0:00:00.105) 0:00:02.853 ********** 2026-04-17 03:12:48.537131 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:12:48.537142 | orchestrator | 2026-04-17 03:12:48.537152 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 03:12:48.537162 | orchestrator | Friday 17 April 2026 03:12:46 +0000 (0:00:00.104) 0:00:02.957 ********** 2026-04-17 03:12:48.537173 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:12:48.537184 | orchestrator | 2026-04-17 03:12:48.537195 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-17 03:12:48.537206 | orchestrator | Friday 17 April 2026 03:12:46 +0000 (0:00:00.578) 0:00:03.536 ********** 2026-04-17 03:12:48.537257 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:12:48.537279 | orchestrator | 2026-04-17 03:12:48.537292 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 03:12:48.537303 | orchestrator | 2026-04-17 03:12:48.537315 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-17 03:12:48.537326 | orchestrator | Friday 17 April 2026 03:12:46 +0000 (0:00:00.099) 0:00:03.636 ********** 2026-04-17 03:12:48.537338 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:12:48.537350 | orchestrator | 2026-04-17 03:12:48.537361 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 03:12:48.537372 | orchestrator | Friday 17 April 2026 03:12:46 +0000 (0:00:00.085) 0:00:03.721 ********** 2026-04-17 03:12:48.537396 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:12:48.537408 | orchestrator | 2026-04-17 03:12:48.537418 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-17 03:12:48.537430 | orchestrator | Friday 17 April 2026 03:12:47 +0000 (0:00:00.595) 0:00:04.317 ********** 2026-04-17 03:12:48.537440 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:12:48.537451 | orchestrator | 2026-04-17 03:12:48.537461 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 03:12:48.537472 | orchestrator | 2026-04-17 03:12:48.537482 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-17 03:12:48.537492 | orchestrator | Friday 17 April 2026 03:12:47 +0000 (0:00:00.105) 0:00:04.423 ********** 2026-04-17 03:12:48.537502 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:12:48.537512 | orchestrator | 2026-04-17 03:12:48.537521 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 03:12:48.537532 | orchestrator | Friday 17 April 2026 03:12:47 +0000 (0:00:00.101) 0:00:04.525 ********** 2026-04-17 03:12:48.537542 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:12:48.537552 | orchestrator | 2026-04-17 03:12:48.537562 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-17 03:12:48.537572 | orchestrator | Friday 17 April 2026 03:12:48 +0000 (0:00:00.589) 0:00:05.114 ********** 2026-04-17 03:12:48.537604 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:12:48.537615 | orchestrator | 2026-04-17 03:12:48.537625 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:12:48.537637 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 03:12:48.537650 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 03:12:48.537661 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-17 03:12:48.537673 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 03:12:48.537685 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 03:12:48.537696 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 03:12:48.537707 | orchestrator | 2026-04-17 03:12:48.537718 | orchestrator | 2026-04-17 03:12:48.537730 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:12:48.537740 | orchestrator | Friday 17 April 2026 03:12:48 +0000 (0:00:00.032) 0:00:05.147 ********** 2026-04-17 03:12:48.537762 | orchestrator | =============================================================================== 2026-04-17 03:12:48.537774 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 3.77s 2026-04-17 03:12:48.537787 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.64s 2026-04-17 03:12:48.537798 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s 2026-04-17 03:12:48.750298 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-17 03:13:00.755017 | orchestrator | 2026-04-17 03:13:00 | INFO  | Task 67b9906b-79b7-44b3-9643-914a1475d0dc (wait-for-connection) was prepared for execution. 2026-04-17 03:13:00.755116 | orchestrator | 2026-04-17 03:13:00 | INFO  | It takes a moment until task 67b9906b-79b7-44b3-9643-914a1475d0dc (wait-for-connection) has been started and output is visible here. 
2026-04-17 03:13:17.113549 | orchestrator | 2026-04-17 03:13:17.113661 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-17 03:13:17.113670 | orchestrator | 2026-04-17 03:13:17.113674 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-17 03:13:17.113678 | orchestrator | Friday 17 April 2026 03:13:05 +0000 (0:00:00.233) 0:00:00.233 ********** 2026-04-17 03:13:17.113682 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:13:17.113688 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:13:17.113691 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:13:17.113695 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:13:17.113699 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:13:17.113702 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:13:17.113706 | orchestrator | 2026-04-17 03:13:17.113710 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:13:17.113714 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:13:17.113720 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:13:17.113724 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:13:17.113728 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:13:17.113732 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:13:17.113735 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:13:17.113739 | orchestrator | 2026-04-17 03:13:17.113743 | orchestrator | 2026-04-17 03:13:17.113747 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-17 03:13:17.113750 | orchestrator | Friday 17 April 2026 03:13:16 +0000 (0:00:11.525) 0:00:11.758 ********** 2026-04-17 03:13:17.113754 | orchestrator | =============================================================================== 2026-04-17 03:13:17.113758 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.53s 2026-04-17 03:13:17.390826 | orchestrator | + osism apply hddtemp 2026-04-17 03:13:29.347698 | orchestrator | 2026-04-17 03:13:29 | INFO  | Task b893a07f-c8cd-498c-b837-500cd8360e44 (hddtemp) was prepared for execution. 2026-04-17 03:13:29.347805 | orchestrator | 2026-04-17 03:13:29 | INFO  | It takes a moment until task b893a07f-c8cd-498c-b837-500cd8360e44 (hddtemp) has been started and output is visible here. 2026-04-17 03:13:55.793674 | orchestrator | 2026-04-17 03:13:55.793756 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-17 03:13:55.793772 | orchestrator | 2026-04-17 03:13:55.793776 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-17 03:13:55.793781 | orchestrator | Friday 17 April 2026 03:13:33 +0000 (0:00:00.190) 0:00:00.190 ********** 2026-04-17 03:13:55.793785 | orchestrator | ok: [testbed-manager] 2026-04-17 03:13:55.793790 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:13:55.793794 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:13:55.793798 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:13:55.793802 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:13:55.793807 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:13:55.793813 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:13:55.793819 | orchestrator | 2026-04-17 03:13:55.793824 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-04-17 03:13:55.793834 | orchestrator | Friday 17 April 2026 
03:13:33 +0000 (0:00:00.516) 0:00:00.707 ********** 2026-04-17 03:13:55.793845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:13:55.793872 | orchestrator | 2026-04-17 03:13:55.793878 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-17 03:13:55.793885 | orchestrator | Friday 17 April 2026 03:13:34 +0000 (0:00:01.035) 0:00:01.742 ********** 2026-04-17 03:13:55.793902 | orchestrator | ok: [testbed-manager] 2026-04-17 03:13:55.793909 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:13:55.793921 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:13:55.793927 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:13:55.793933 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:13:55.793939 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:13:55.793945 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:13:55.793951 | orchestrator | 2026-04-17 03:13:55.793957 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-17 03:13:55.793977 | orchestrator | Friday 17 April 2026 03:13:36 +0000 (0:00:01.640) 0:00:03.383 ********** 2026-04-17 03:13:55.793984 | orchestrator | changed: [testbed-manager] 2026-04-17 03:13:55.793991 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:13:55.793997 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:13:55.794004 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:13:55.794010 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:13:55.794065 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:13:55.794071 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:13:55.794077 | orchestrator | 2026-04-17 03:13:55.794084 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module 
is available] ********* 2026-04-17 03:13:55.794089 | orchestrator | Friday 17 April 2026 03:13:37 +0000 (0:00:01.026) 0:00:04.409 ********** 2026-04-17 03:13:55.794096 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:13:55.794102 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:13:55.794109 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:13:55.794115 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:13:55.794121 | orchestrator | ok: [testbed-manager] 2026-04-17 03:13:55.794127 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:13:55.794133 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:13:55.794139 | orchestrator | 2026-04-17 03:13:55.794146 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-17 03:13:55.794152 | orchestrator | Friday 17 April 2026 03:13:39 +0000 (0:00:01.846) 0:00:06.258 ********** 2026-04-17 03:13:55.794158 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:13:55.794164 | orchestrator | changed: [testbed-manager] 2026-04-17 03:13:55.794171 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:13:55.794177 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:13:55.794183 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:13:55.794189 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:13:55.794195 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:13:55.794201 | orchestrator | 2026-04-17 03:13:55.794207 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-17 03:13:55.794214 | orchestrator | Friday 17 April 2026 03:13:40 +0000 (0:00:01.014) 0:00:07.272 ********** 2026-04-17 03:13:55.794218 | orchestrator | changed: [testbed-manager] 2026-04-17 03:13:55.794222 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:13:55.794226 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:13:55.794229 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:13:55.794251 | orchestrator | changed: 
[testbed-node-0] 2026-04-17 03:13:55.794257 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:13:55.794263 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:13:55.794267 | orchestrator | 2026-04-17 03:13:55.794272 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-17 03:13:55.794277 | orchestrator | Friday 17 April 2026 03:13:52 +0000 (0:00:12.117) 0:00:19.389 ********** 2026-04-17 03:13:55.794281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:13:55.794286 | orchestrator | 2026-04-17 03:13:55.794297 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-17 03:13:55.794301 | orchestrator | Friday 17 April 2026 03:13:53 +0000 (0:00:01.174) 0:00:20.563 ********** 2026-04-17 03:13:55.794305 | orchestrator | changed: [testbed-manager] 2026-04-17 03:13:55.794310 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:13:55.794316 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:13:55.794323 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:13:55.794329 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:13:55.794335 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:13:55.794341 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:13:55.794348 | orchestrator | 2026-04-17 03:13:55.794354 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:13:55.794361 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:13:55.794385 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:13:55.794391 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:13:55.794396 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:13:55.794400 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:13:55.794405 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:13:55.794409 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:13:55.794413 | orchestrator | 2026-04-17 03:13:55.794418 | orchestrator | 2026-04-17 03:13:55.794422 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:13:55.794426 | orchestrator | Friday 17 April 2026 03:13:55 +0000 (0:00:01.817) 0:00:22.380 ********** 2026-04-17 03:13:55.794431 | orchestrator | =============================================================================== 2026-04-17 03:13:55.794435 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.12s 2026-04-17 03:13:55.794439 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.85s 2026-04-17 03:13:55.794444 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.82s 2026-04-17 03:13:55.794452 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.64s 2026-04-17 03:13:55.794456 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.17s 2026-04-17 03:13:55.794460 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.04s 2026-04-17 03:13:55.794465 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.03s 2026-04-17 03:13:55.794469 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 1.01s 2026-04-17 03:13:55.794473 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.52s 2026-04-17 03:13:56.080795 | orchestrator | ++ semver 9.5.0 7.1.1 2026-04-17 03:13:56.127598 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-17 03:13:56.127674 | orchestrator | + sudo systemctl restart manager.service 2026-04-17 03:14:09.902205 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-17 03:14:09.903194 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-17 03:14:09.903291 | orchestrator | + local max_attempts=60 2026-04-17 03:14:09.903310 | orchestrator | + local name=ceph-ansible 2026-04-17 03:14:09.903323 | orchestrator | + local attempt_num=1 2026-04-17 03:14:09.903350 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:14:09.931020 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 03:14:09.931109 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 03:14:09.931122 | orchestrator | + sleep 5 2026-04-17 03:14:14.934811 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:14:14.963697 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 03:14:14.963778 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 03:14:14.963788 | orchestrator | + sleep 5 2026-04-17 03:14:19.969333 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:14:19.992883 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 03:14:19.992968 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 03:14:19.992981 | orchestrator | + sleep 5 2026-04-17 03:14:24.996644 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:14:25.032574 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 03:14:25.032661 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-04-17 03:14:25.032670 | orchestrator | + sleep 5 2026-04-17 03:14:30.037648 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:14:30.075223 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 03:14:30.075314 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 03:14:30.075321 | orchestrator | + sleep 5 2026-04-17 03:14:35.080751 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:14:35.117560 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 03:14:35.117659 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 03:14:35.117675 | orchestrator | + sleep 5 2026-04-17 03:14:40.122352 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:14:40.160365 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 03:14:40.160430 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 03:14:40.160436 | orchestrator | + sleep 5 2026-04-17 03:14:45.165418 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:14:45.205534 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 03:14:45.205599 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 03:14:45.205608 | orchestrator | + sleep 5 2026-04-17 03:14:50.211347 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:14:50.244348 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 03:14:50.244420 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 03:14:50.244429 | orchestrator | + sleep 5 2026-04-17 03:14:55.247962 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:14:55.284051 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 03:14:55.284162 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-04-17 03:14:55.284179 | orchestrator | + sleep 5 2026-04-17 03:15:00.288001 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:15:00.323728 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 03:15:00.323833 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 03:15:00.323849 | orchestrator | + sleep 5 2026-04-17 03:15:05.328585 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:15:05.370192 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 03:15:05.370312 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 03:15:05.370323 | orchestrator | + sleep 5 2026-04-17 03:15:10.374554 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:15:10.404141 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 03:15:10.404258 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 03:15:10.404270 | orchestrator | + sleep 5 2026-04-17 03:15:15.408727 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 03:15:15.452856 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 03:15:15.452928 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-17 03:15:15.452935 | orchestrator | + local max_attempts=60 2026-04-17 03:15:15.452941 | orchestrator | + local name=kolla-ansible 2026-04-17 03:15:15.452946 | orchestrator | + local attempt_num=1 2026-04-17 03:15:15.453484 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-17 03:15:15.490082 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 03:15:15.490160 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-17 03:15:15.490169 | orchestrator | + local max_attempts=60 2026-04-17 03:15:15.490204 | orchestrator | + local name=osism-ansible 2026-04-17 03:15:15.490211 | 
orchestrator | + local attempt_num=1 2026-04-17 03:15:15.490884 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-17 03:15:15.523550 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 03:15:15.523617 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-17 03:15:15.523623 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-17 03:15:15.669998 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-17 03:15:15.826421 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-17 03:15:15.988524 | orchestrator | ARA in osism-ansible already disabled. 2026-04-17 03:15:16.124190 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-17 03:15:16.125129 | orchestrator | + osism apply gather-facts 2026-04-17 03:15:28.173905 | orchestrator | 2026-04-17 03:15:28 | INFO  | Task 6723ff6c-3f2d-4d60-9965-48b5613d1f83 (gather-facts) was prepared for execution. 2026-04-17 03:15:28.174082 | orchestrator | 2026-04-17 03:15:28 | INFO  | It takes a moment until task 6723ff6c-3f2d-4d60-9965-48b5613d1f83 (gather-facts) has been started and output is visible here. 
2026-04-17 03:15:40.268841 | orchestrator | 2026-04-17 03:15:40.268946 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-17 03:15:40.268959 | orchestrator | 2026-04-17 03:15:40.268967 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-17 03:15:40.268974 | orchestrator | Friday 17 April 2026 03:15:32 +0000 (0:00:00.160) 0:00:00.160 ********** 2026-04-17 03:15:40.268980 | orchestrator | ok: [testbed-manager] 2026-04-17 03:15:40.268989 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:15:40.268995 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:15:40.269002 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:15:40.269008 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:15:40.269014 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:15:40.269021 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:15:40.269027 | orchestrator | 2026-04-17 03:15:40.269033 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-17 03:15:40.269039 | orchestrator | 2026-04-17 03:15:40.269045 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-17 03:15:40.269052 | orchestrator | Friday 17 April 2026 03:15:39 +0000 (0:00:07.310) 0:00:07.470 ********** 2026-04-17 03:15:40.269058 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:15:40.269065 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:15:40.269071 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:15:40.269078 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:15:40.269084 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:15:40.269090 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:15:40.269096 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:15:40.269102 | orchestrator | 2026-04-17 03:15:40.269108 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-17 03:15:40.269115 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:15:40.269123 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:15:40.269129 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:15:40.269138 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:15:40.269148 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:15:40.269163 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:15:40.269174 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 03:15:40.269306 | orchestrator | 2026-04-17 03:15:40.269317 | orchestrator | 2026-04-17 03:15:40.269323 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:15:40.269330 | orchestrator | Friday 17 April 2026 03:15:39 +0000 (0:00:00.512) 0:00:07.983 ********** 2026-04-17 03:15:40.269337 | orchestrator | =============================================================================== 2026-04-17 03:15:40.269347 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.31s 2026-04-17 03:15:40.269357 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-04-17 03:15:40.564631 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-17 03:15:40.583058 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-17 
03:15:40.602142 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-17 03:15:40.622943 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-17 03:15:40.644601 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-17 03:15:40.658008 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-17 03:15:40.669886 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-17 03:15:40.681121 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-17 03:15:40.693207 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-17 03:15:40.704676 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-17 03:15:40.719206 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-17 03:15:40.733573 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-17 03:15:40.750243 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-17 03:15:40.763686 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-17 03:15:40.782337 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-17 03:15:40.793385 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-17 03:15:40.806318 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-17 03:15:40.815115 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-17 03:15:40.824495 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-17 03:15:40.835402 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-17 03:15:40.853583 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-17 03:15:40.866262 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-17 03:15:40.875838 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-17 03:15:40.888093 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-17 03:15:41.073022 | orchestrator | ok: Runtime: 0:23:29.845220 2026-04-17 03:15:41.271493 | 2026-04-17 03:15:41.271638 | TASK [Deploy services] 2026-04-17 03:15:42.004607 | orchestrator | 2026-04-17 03:15:42.004780 | orchestrator | # DEPLOY SERVICES 2026-04-17 03:15:42.004802 | orchestrator | 2026-04-17 03:15:42.004814 | orchestrator | + set -e 2026-04-17 03:15:42.004828 | orchestrator | + echo 2026-04-17 03:15:42.004844 | orchestrator | + echo '# DEPLOY SERVICES' 2026-04-17 03:15:42.004858 | orchestrator | + echo 2026-04-17 03:15:42.004903 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 03:15:42.004928 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 03:15:42.004940 | orchestrator | ++ INTERACTIVE=false 2026-04-17 
03:15:42.004948 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 03:15:42.004964 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 03:15:42.004971 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 03:15:42.004982 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 03:15:42.004990 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 03:15:42.005001 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 03:15:42.005009 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 03:15:42.005019 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 03:15:42.005027 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 03:15:42.005036 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-17 03:15:42.005044 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-17 03:15:42.005051 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 03:15:42.005059 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 03:15:42.005066 | orchestrator | ++ export ARA=false 2026-04-17 03:15:42.005074 | orchestrator | ++ ARA=false 2026-04-17 03:15:42.005081 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 03:15:42.005088 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 03:15:42.005095 | orchestrator | ++ export TEMPEST=false 2026-04-17 03:15:42.005103 | orchestrator | ++ TEMPEST=false 2026-04-17 03:15:42.005110 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 03:15:42.005117 | orchestrator | ++ IS_ZUUL=true 2026-04-17 03:15:42.005124 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 03:15:42.005131 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 03:15:42.005139 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 03:15:42.005146 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 03:15:42.005153 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 03:15:42.005164 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 03:15:42.005175 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 
03:15:42.005193 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 03:15:42.005206 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 03:15:42.005295 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 03:15:42.005308 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-04-17 03:15:42.013771 | orchestrator | + set -e 2026-04-17 03:15:42.013947 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 03:15:42.013975 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 03:15:42.013992 | orchestrator | ++ INTERACTIVE=false 2026-04-17 03:15:42.014065 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 03:15:42.014079 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 03:15:42.014088 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 03:15:42.014097 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 03:15:42.014106 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 03:15:42.014115 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 03:15:42.014123 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 03:15:42.014132 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 03:15:42.014141 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 03:15:42.014164 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-17 03:15:42.014174 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-17 03:15:42.014183 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 03:15:42.014192 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 03:15:42.014202 | orchestrator | ++ export ARA=false 2026-04-17 03:15:42.014232 | orchestrator | ++ ARA=false 2026-04-17 03:15:42.014249 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 03:15:42.014258 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 03:15:42.014267 | orchestrator | ++ export TEMPEST=false 2026-04-17 03:15:42.014280 | orchestrator | ++ TEMPEST=false 2026-04-17 03:15:42.014289 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 
03:15:42.014297 | orchestrator | ++ IS_ZUUL=true 2026-04-17 03:15:42.014306 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 03:15:42.014315 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 03:15:42.014334 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 03:15:42.014343 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 03:15:42.014352 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 03:15:42.014360 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 03:15:42.014370 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 03:15:42.014378 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 03:15:42.014417 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 03:15:42.014426 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 03:15:42.014435 | orchestrator | + echo 2026-04-17 03:15:42.014444 | orchestrator | 2026-04-17 03:15:42.014453 | orchestrator | # PULL IMAGES 2026-04-17 03:15:42.014462 | orchestrator | 2026-04-17 03:15:42.014471 | orchestrator | + echo '# PULL IMAGES' 2026-04-17 03:15:42.014480 | orchestrator | + echo 2026-04-17 03:15:42.015549 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-17 03:15:42.071672 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-17 03:15:42.071792 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-17 03:15:44.024827 | orchestrator | 2026-04-17 03:15:44 | INFO  | Trying to run play pull-images in environment custom 2026-04-17 03:15:54.134452 | orchestrator | 2026-04-17 03:15:54 | INFO  | Task d444d5e8-f2c0-42e1-92cf-0ca1c473be35 (pull-images) was prepared for execution. 2026-04-17 03:15:54.134546 | orchestrator | 2026-04-17 03:15:54 | INFO  | Task d444d5e8-f2c0-42e1-92cf-0ca1c473be35 is running in background. No more output. Check ARA for logs. 
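The trace gates several steps on `semver X Y` followed by `[[ 1 -ge 0 ]]`, i.e. a helper that prints a three-way comparison of two versions (here `semver 9.5.0 7.0.0` printed `1`, so the `>= 7.0.0` branch runs). The actual `semver` helper is not shown in this log; a plausible stand-in with the same observable contract, built on GNU `sort -V`, would be:

```shell
# Hypothetical sketch of the semver helper seen in the xtrace: prints
# -1, 0, or 1 depending on how version $1 compares to version $2.
# Relies on GNU coreutils `sort -V` (version sort) for the ordering.
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1   # $1 sorts first, so it is the older version
    else
        echo 1    # $1 sorts last, so it is the newer version
    fi
}
```

With this contract, `[[ $(semver 9.5.0 7.0.0) -ge 0 ]]` means "manager version is at least 7.0.0", which matches how the script uses it before `osism apply --no-wait -r 2 -e custom pull-images`.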
2026-04-17 03:15:54.338751 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-04-17 03:16:06.197769 | orchestrator | 2026-04-17 03:16:06 | INFO  | Task 55c5c354-3cde-47d4-8e19-159c06c711f5 (cgit) was prepared for execution.
2026-04-17 03:16:06.197889 | orchestrator | 2026-04-17 03:16:06 | INFO  | Task 55c5c354-3cde-47d4-8e19-159c06c711f5 is running in background. No more output. Check ARA for logs.
2026-04-17 03:16:18.513457 | orchestrator | 2026-04-17 03:16:18 | INFO  | Task cb06b6b4-290c-4051-8fb4-48a08096f852 (dotfiles) was prepared for execution.
2026-04-17 03:16:18.513536 | orchestrator | 2026-04-17 03:16:18 | INFO  | Task cb06b6b4-290c-4051-8fb4-48a08096f852 is running in background. No more output. Check ARA for logs.
2026-04-17 03:16:30.966475 | orchestrator | 2026-04-17 03:16:30 | INFO  | Task 7110188a-12c8-4733-848b-36efb6d39340 (homer) was prepared for execution.
2026-04-17 03:16:30.966636 | orchestrator | 2026-04-17 03:16:30 | INFO  | Task 7110188a-12c8-4733-848b-36efb6d39340 is running in background. No more output. Check ARA for logs.
2026-04-17 03:16:43.345439 | orchestrator | 2026-04-17 03:16:43 | INFO  | Task 3bea77a1-2c97-4138-b7d3-38bd10d9e788 (phpmyadmin) was prepared for execution.
2026-04-17 03:16:43.345589 | orchestrator | 2026-04-17 03:16:43 | INFO  | Task 3bea77a1-2c97-4138-b7d3-38bd10d9e788 is running in background. No more output. Check ARA for logs.
2026-04-17 03:16:55.830409 | orchestrator | 2026-04-17 03:16:55 | INFO  | Task f566bb15-1e85-468b-b9dc-21a6a73e3c8f (sosreport) was prepared for execution.
2026-04-17 03:16:55.830518 | orchestrator | 2026-04-17 03:16:55 | INFO  | Task f566bb15-1e85-468b-b9dc-21a6a73e3c8f is running in background. No more output. Check ARA for logs.
2026-04-17 03:16:56.120489 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-04-17 03:16:56.125870 | orchestrator | + set -e
2026-04-17 03:16:56.125947 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-17 03:16:56.125957 | orchestrator | ++ export INTERACTIVE=false
2026-04-17 03:16:56.125965 | orchestrator | ++ INTERACTIVE=false
2026-04-17 03:16:56.125974 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-17 03:16:56.125981 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-17 03:16:56.126111 | orchestrator | + source /opt/manager-vars.sh
2026-04-17 03:16:56.126130 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-17 03:16:56.126141 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-17 03:16:56.126151 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-17 03:16:56.126161 | orchestrator | ++ CEPH_VERSION=reef
2026-04-17 03:16:56.126173 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-17 03:16:56.126184 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-17 03:16:56.126193 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-17 03:16:56.126199 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-17 03:16:56.126282 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-17 03:16:56.126299 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-17 03:16:56.126310 | orchestrator | ++ export ARA=false
2026-04-17 03:16:56.126321 | orchestrator | ++ ARA=false
2026-04-17 03:16:56.126332 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-17 03:16:56.126373 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-17 03:16:56.126383 | orchestrator | ++ export TEMPEST=false
2026-04-17 03:16:56.126392 | orchestrator | ++ TEMPEST=false
2026-04-17 03:16:56.126399 | orchestrator | ++ export IS_ZUUL=true
2026-04-17 03:16:56.126405 | orchestrator | ++ IS_ZUUL=true
2026-04-17 03:16:56.126424 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96
2026-04-17 03:16:56.126435 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96
2026-04-17 03:16:56.126441 | orchestrator | ++ export EXTERNAL_API=false
2026-04-17 03:16:56.126447 | orchestrator | ++ EXTERNAL_API=false
2026-04-17 03:16:56.126454 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-17 03:16:56.126462 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-17 03:16:56.126476 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-17 03:16:56.126489 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-17 03:16:56.126499 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-17 03:16:56.126510 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-17 03:16:56.126851 | orchestrator | ++ semver 9.5.0 8.0.3
2026-04-17 03:16:56.182308 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-17 03:16:56.182395 | orchestrator | + osism apply frr
2026-04-17 03:17:08.700782 | orchestrator | 2026-04-17 03:17:08 | INFO  | Task 5223b190-972f-44cf-9597-43b2c2382111 (frr) was prepared for execution.
2026-04-17 03:17:08.700893 | orchestrator | 2026-04-17 03:17:08 | INFO  | It takes a moment until task 5223b190-972f-44cf-9597-43b2c2382111 (frr) has been started and output is visible here.
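The scripts export OSISM_APPLY_RETRY=1 and the earlier invocation passed `-r 2` to `osism apply`, suggesting a retry-on-failure pattern. A generic retry wrapper sketching that behavior in plain POSIX shell — the exact semantics of `-r` and OSISM_APPLY_RETRY in the real tooling are an assumption here:

```shell
#!/bin/sh
# Generic retry wrapper: run a command up to $1 times, stopping at the
# first success. Sketches the retry behavior suggested by the trace's
# OSISM_APPLY_RETRY variable and `osism apply -r 2` flag (assumed meaning).
retry() {
    attempts=$1
    shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0
        echo "attempt $i/$attempts failed, retrying" >&2
        i=$((i + 1))
    done
    return 1
}

# Example: a command that succeeds immediately is run exactly once.
retry 3 echo "osism apply frr (stub command)"
```

A wrapper like this returns the wrapped command's success as soon as one attempt passes, and a non-zero status only after all attempts are exhausted.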
2026-04-17 03:17:38.821076 | orchestrator |
2026-04-17 03:17:38.821169 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-17 03:17:38.821177 | orchestrator |
2026-04-17 03:17:38.821181 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-17 03:17:38.821190 | orchestrator | Friday 17 April 2026 03:17:15 +0000 (0:00:00.215) 0:00:00.215 **********
2026-04-17 03:17:38.821195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-17 03:17:38.821201 | orchestrator |
2026-04-17 03:17:38.821206 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-17 03:17:38.821247 | orchestrator | Friday 17 April 2026 03:17:15 +0000 (0:00:00.225) 0:00:00.440 **********
2026-04-17 03:17:38.821252 | orchestrator | changed: [testbed-manager]
2026-04-17 03:17:38.821257 | orchestrator |
2026-04-17 03:17:38.821262 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-17 03:17:38.821268 | orchestrator | Friday 17 April 2026 03:17:16 +0000 (0:00:01.053) 0:00:01.494 **********
2026-04-17 03:17:38.821272 | orchestrator | changed: [testbed-manager]
2026-04-17 03:17:38.821276 | orchestrator |
2026-04-17 03:17:38.821280 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-17 03:17:38.821285 | orchestrator | Friday 17 April 2026 03:17:27 +0000 (0:00:11.178) 0:00:12.672 **********
2026-04-17 03:17:38.821288 | orchestrator | ok: [testbed-manager]
2026-04-17 03:17:38.821293 | orchestrator |
2026-04-17 03:17:38.821297 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-17 03:17:38.821301 | orchestrator | Friday 17 April 2026 03:17:28 +0000 (0:00:01.100) 0:00:13.772 **********
2026-04-17 03:17:38.821305 | orchestrator | changed: [testbed-manager]
2026-04-17 03:17:38.821309 | orchestrator |
2026-04-17 03:17:38.821312 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-17 03:17:38.821316 | orchestrator | Friday 17 April 2026 03:17:29 +0000 (0:00:01.098) 0:00:14.871 **********
2026-04-17 03:17:38.821320 | orchestrator | ok: [testbed-manager]
2026-04-17 03:17:38.821324 | orchestrator |
2026-04-17 03:17:38.821328 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-17 03:17:38.821333 | orchestrator | Friday 17 April 2026 03:17:31 +0000 (0:00:01.491) 0:00:16.362 **********
2026-04-17 03:17:38.821336 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:17:38.821340 | orchestrator |
2026-04-17 03:17:38.821344 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-17 03:17:38.821348 | orchestrator | Friday 17 April 2026 03:17:31 +0000 (0:00:00.147) 0:00:16.510 **********
2026-04-17 03:17:38.821368 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:17:38.821373 | orchestrator |
2026-04-17 03:17:38.821377 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-17 03:17:38.821381 | orchestrator | Friday 17 April 2026 03:17:31 +0000 (0:00:00.163) 0:00:16.673 **********
2026-04-17 03:17:38.821385 | orchestrator | changed: [testbed-manager]
2026-04-17 03:17:38.821388 | orchestrator |
2026-04-17 03:17:38.821392 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-17 03:17:38.821396 | orchestrator | Friday 17 April 2026 03:17:32 +0000 (0:00:01.058) 0:00:17.732 **********
2026-04-17 03:17:38.821400 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-17 03:17:38.821404 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-17 03:17:38.821409 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-17 03:17:38.821413 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-17 03:17:38.821417 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-17 03:17:38.821421 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-17 03:17:38.821425 | orchestrator |
2026-04-17 03:17:38.821428 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-17 03:17:38.821432 | orchestrator | Friday 17 April 2026 03:17:35 +0000 (0:00:02.473) 0:00:20.206 **********
2026-04-17 03:17:38.821436 | orchestrator | ok: [testbed-manager]
2026-04-17 03:17:38.821440 | orchestrator |
2026-04-17 03:17:38.821443 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-17 03:17:38.821447 | orchestrator | Friday 17 April 2026 03:17:36 +0000 (0:00:01.756) 0:00:21.963 **********
2026-04-17 03:17:38.821451 | orchestrator | changed: [testbed-manager]
2026-04-17 03:17:38.821455 | orchestrator |
2026-04-17 03:17:38.821459 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:17:38.821463 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:17:38.821467 | orchestrator |
2026-04-17 03:17:38.821470 | orchestrator |
2026-04-17 03:17:38.821479 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:17:38.821482 | orchestrator | Friday 17 April 2026 03:17:38 +0000 (0:00:01.505) 0:00:23.469 **********
2026-04-17 03:17:38.821486 | orchestrator | ===============================================================================
2026-04-17 03:17:38.821490 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.18s
2026-04-17 03:17:38.821494 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.47s
2026-04-17 03:17:38.821498 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.76s
2026-04-17 03:17:38.821501 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.51s
2026-04-17 03:17:38.821505 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.49s
2026-04-17 03:17:38.821520 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.10s
2026-04-17 03:17:38.821524 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.10s
2026-04-17 03:17:38.821528 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.06s
2026-04-17 03:17:38.821532 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.05s
2026-04-17 03:17:38.821535 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s
2026-04-17 03:17:38.821539 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s
2026-04-17 03:17:38.821543 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s
2026-04-17 03:17:39.293867 | orchestrator | + osism apply kubernetes
2026-04-17 03:17:42.072150 | orchestrator | 2026-04-17 03:17:42 | INFO  | Task 49e09cd7-8be8-4b56-bb29-67a68c1cbe6d (kubernetes) was prepared for execution.
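The "Set sysctl parameters" loop in the frr play above applies six kernel settings on testbed-manager. Consolidated from the loop items in the log into a sysctl.d-style fragment (the file path would be chosen by the role and is not shown in the log), they read:

```
# Kernel parameters applied by "osism.services.frr : Set sysctl parameters"
# (names and values taken from the loop items logged above)
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
```

These are typical settings for a host acting as a BGP router: forwarding on, ICMP redirects off, multipath hashing enabled, and loose reverse-path filtering.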
2026-04-17 03:17:42.072325 | orchestrator | 2026-04-17 03:17:42 | INFO  | It takes a moment until task 49e09cd7-8be8-4b56-bb29-67a68c1cbe6d (kubernetes) has been started and output is visible here.
2026-04-17 03:18:07.721709 | orchestrator |
2026-04-17 03:18:07.721796 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-17 03:18:07.721804 | orchestrator |
2026-04-17 03:18:07.721808 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-17 03:18:07.721813 | orchestrator | Friday 17 April 2026 03:17:47 +0000 (0:00:00.204) 0:00:00.204 **********
2026-04-17 03:18:07.721818 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:18:07.721823 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:18:07.721827 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:18:07.721831 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:18:07.721835 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:18:07.721839 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:18:07.721842 | orchestrator |
2026-04-17 03:18:07.721846 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-17 03:18:07.721850 | orchestrator | Friday 17 April 2026 03:17:48 +0000 (0:00:00.744) 0:00:00.949 **********
2026-04-17 03:18:07.721854 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:18:07.721858 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:18:07.721862 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:18:07.721866 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:18:07.721870 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:18:07.721873 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:18:07.721877 | orchestrator |
2026-04-17 03:18:07.721881 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-17 03:18:07.721886 | orchestrator | Friday 17 April 2026 03:17:48 +0000 (0:00:00.611) 0:00:01.561 **********
2026-04-17 03:18:07.721890 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:18:07.721894 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:18:07.721897 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:18:07.721901 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:18:07.721905 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:18:07.721909 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:18:07.721912 | orchestrator |
2026-04-17 03:18:07.721916 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-17 03:18:07.721920 | orchestrator | Friday 17 April 2026 03:17:49 +0000 (0:00:00.570) 0:00:02.131 **********
2026-04-17 03:18:07.721924 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:18:07.721927 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:18:07.721931 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:18:07.721938 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:18:07.721942 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:18:07.721945 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:18:07.721949 | orchestrator |
2026-04-17 03:18:07.721953 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-17 03:18:07.721957 | orchestrator | Friday 17 April 2026 03:17:51 +0000 (0:00:02.326) 0:00:04.458 **********
2026-04-17 03:18:07.721961 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:18:07.721965 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:18:07.721968 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:18:07.721972 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:18:07.721976 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:18:07.721980 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:18:07.721983 | orchestrator |
2026-04-17 03:18:07.721987 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-17 03:18:07.721991 | orchestrator | Friday 17 April 2026 03:17:53 +0000 (0:00:01.780) 0:00:06.239 **********
2026-04-17 03:18:07.721995 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:18:07.722048 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:18:07.722056 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:18:07.722062 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:18:07.722068 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:18:07.722074 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:18:07.722080 | orchestrator |
2026-04-17 03:18:07.722092 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-17 03:18:07.722098 | orchestrator | Friday 17 April 2026 03:17:54 +0000 (0:00:00.938) 0:00:07.177 **********
2026-04-17 03:18:07.722104 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:18:07.722110 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:18:07.722116 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:18:07.722122 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:18:07.722128 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:18:07.722134 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:18:07.722139 | orchestrator |
2026-04-17 03:18:07.722145 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-17 03:18:07.722151 | orchestrator | Friday 17 April 2026 03:17:55 +0000 (0:00:00.706) 0:00:07.884 **********
2026-04-17 03:18:07.722157 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:18:07.722163 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:18:07.722169 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:18:07.722176 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:18:07.722182 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:18:07.722188 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:18:07.722193 | orchestrator |
2026-04-17 03:18:07.722199 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-17 03:18:07.722205 | orchestrator | Friday 17 April 2026 03:17:55 +0000 (0:00:00.568) 0:00:08.452 **********
2026-04-17 03:18:07.722210 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 03:18:07.722237 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 03:18:07.722243 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:18:07.722248 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 03:18:07.722254 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 03:18:07.722260 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:18:07.722267 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 03:18:07.722273 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 03:18:07.722279 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:18:07.722285 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 03:18:07.722307 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 03:18:07.722315 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:18:07.722321 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 03:18:07.722328 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 03:18:07.722334 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:18:07.722341 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 03:18:07.722347 |
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 03:18:07.722353 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:18:07.722358 | orchestrator |
2026-04-17 03:18:07.722362 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-17 03:18:07.722374 | orchestrator | Friday 17 April 2026 03:17:56 +0000 (0:00:00.578) 0:00:09.031 **********
2026-04-17 03:18:07.722379 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:18:07.722389 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:18:07.722393 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:18:07.722404 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:18:07.722409 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:18:07.722413 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:18:07.722417 | orchestrator |
2026-04-17 03:18:07.722422 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-17 03:18:07.722427 | orchestrator | Friday 17 April 2026 03:17:57 +0000 (0:00:01.123) 0:00:10.154 **********
2026-04-17 03:18:07.722432 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:18:07.722436 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:18:07.722440 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:18:07.722445 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:18:07.722449 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:18:07.722453 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:18:07.722457 | orchestrator |
2026-04-17 03:18:07.722462 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-17 03:18:07.722466 | orchestrator | Friday 17 April 2026 03:17:58 +0000 (0:00:00.827) 0:00:10.983 **********
2026-04-17 03:18:07.722471 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:18:07.722475 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:18:07.722479 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:18:07.722484 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:18:07.722488 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:18:07.722493 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:18:07.722497 | orchestrator |
2026-04-17 03:18:07.722501 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-17 03:18:07.722506 | orchestrator | Friday 17 April 2026 03:18:03 +0000 (0:00:05.485) 0:00:16.469 **********
2026-04-17 03:18:07.722510 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:18:07.722518 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:18:07.722523 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:18:07.722527 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:18:07.722532 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:18:07.722536 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:18:07.722541 | orchestrator |
2026-04-17 03:18:07.722546 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-17 03:18:07.722550 | orchestrator | Friday 17 April 2026 03:18:04 +0000 (0:00:00.916) 0:00:17.385 **********
2026-04-17 03:18:07.722555 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:18:07.722559 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:18:07.722563 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:18:07.722568 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:18:07.722572 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:18:07.722576 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:18:07.722579 | orchestrator |
2026-04-17 03:18:07.722583 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-17 03:18:07.722589 | orchestrator | Friday 17 April 2026 03:18:05 +0000 (0:00:01.167) 0:00:18.553 **********
2026-04-17 03:18:07.722592 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:18:07.722596 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:18:07.722600 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:18:07.722603 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:18:07.722607 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:18:07.722610 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:18:07.722614 | orchestrator |
2026-04-17 03:18:07.722618 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-17 03:18:07.722622 | orchestrator | Friday 17 April 2026 03:18:06 +0000 (0:00:00.901) 0:00:19.454 **********
2026-04-17 03:18:07.722626 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-17 03:18:07.722633 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-17 03:18:07.722637 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:18:07.722641 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-17 03:18:07.722648 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-17 03:18:07.722651 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:18:07.722655 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-17 03:18:07.722659 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-17 03:18:07.722663 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:18:07.722666 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-17 03:18:07.722670 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-17 03:18:07.722674 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:18:07.722677 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-17 03:18:07.722681 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-17 03:18:07.722685 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:18:07.722688 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-17 03:18:07.722692 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-17 03:18:07.722696 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:18:07.722700 | orchestrator |
2026-04-17 03:18:07.722703 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-17 03:18:07.722711 | orchestrator | Friday 17 April 2026 03:18:07 +0000 (0:00:00.956) 0:00:20.411 **********
2026-04-17 03:19:20.494348 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:19:20.494457 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:19:20.494466 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:19:20.494473 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:19:20.494479 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:19:20.494486 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:19:20.494492 | orchestrator |
2026-04-17 03:19:20.494501 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-17 03:19:20.494509 | orchestrator | Friday 17 April 2026 03:18:08 +0000 (0:00:00.565) 0:00:20.976 **********
2026-04-17 03:19:20.494515 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:19:20.494521 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:19:20.494527 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:19:20.494533 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:19:20.494539 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:19:20.494545 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:19:20.494550 | orchestrator |
2026-04-17 03:19:20.494556 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-17 03:19:20.494562 | orchestrator |
2026-04-17 03:19:20.494568 |
orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-17 03:19:20.494575 | orchestrator | Friday 17 April 2026 03:18:09 +0000 (0:00:00.963) 0:00:21.939 **********
2026-04-17 03:19:20.494581 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:19:20.494588 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:19:20.494594 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:19:20.494599 | orchestrator |
2026-04-17 03:19:20.494605 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-17 03:19:20.494611 | orchestrator | Friday 17 April 2026 03:18:10 +0000 (0:00:01.137) 0:00:23.077 **********
2026-04-17 03:19:20.494617 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:19:20.494623 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:19:20.494629 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:19:20.494635 | orchestrator |
2026-04-17 03:19:20.494641 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-17 03:19:20.494648 | orchestrator | Friday 17 April 2026 03:18:11 +0000 (0:00:01.029) 0:00:24.106 **********
2026-04-17 03:19:20.494654 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:19:20.494661 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:19:20.494667 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:19:20.494674 | orchestrator |
2026-04-17 03:19:20.494680 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-17 03:19:20.494686 | orchestrator | Friday 17 April 2026 03:18:12 +0000 (0:00:00.865) 0:00:24.972 **********
2026-04-17 03:19:20.494714 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:19:20.494720 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:19:20.494727 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:19:20.494732 | orchestrator |
2026-04-17 03:19:20.494738 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-17 03:19:20.494744 | orchestrator | Friday 17 April 2026 03:18:12 +0000 (0:00:00.637) 0:00:25.609 **********
2026-04-17 03:19:20.494750 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:19:20.494755 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:19:20.494761 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:19:20.494767 | orchestrator |
2026-04-17 03:19:20.494773 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-17 03:19:20.494794 | orchestrator | Friday 17 April 2026 03:18:13 +0000 (0:00:00.291) 0:00:25.901 **********
2026-04-17 03:19:20.494800 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:19:20.494806 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:19:20.494811 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:19:20.494817 | orchestrator |
2026-04-17 03:19:20.494823 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-17 03:19:20.494829 | orchestrator | Friday 17 April 2026 03:18:13 +0000 (0:00:00.779) 0:00:26.681 **********
2026-04-17 03:19:20.494835 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:19:20.494841 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:19:20.494847 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:19:20.494852 | orchestrator |
2026-04-17 03:19:20.494859 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-17 03:19:20.494865 | orchestrator | Friday 17 April 2026 03:18:15 +0000 (0:00:01.319) 0:00:28.001 **********
2026-04-17 03:19:20.494872 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:19:20.494878 | orchestrator |
2026-04-17 03:19:20.494883 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-17 03:19:20.494889 | orchestrator | Friday 17 April 2026 03:18:15 +0000 (0:00:00.484) 0:00:28.486 **********
2026-04-17 03:19:20.494895 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:19:20.494901 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:19:20.494907 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:19:20.494913 | orchestrator |
2026-04-17 03:19:20.494920 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-17 03:19:20.494929 | orchestrator | Friday 17 April 2026 03:18:17 +0000 (0:00:01.434) 0:00:29.920 **********
2026-04-17 03:19:20.494939 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:19:20.494948 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:19:20.494958 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:19:20.494968 | orchestrator |
2026-04-17 03:19:20.494977 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-17 03:19:20.494986 | orchestrator | Friday 17 April 2026 03:18:17 +0000 (0:00:00.512) 0:00:30.433 **********
2026-04-17 03:19:20.494997 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:19:20.495006 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:19:20.495016 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:19:20.495025 | orchestrator |
2026-04-17 03:19:20.495037 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-17 03:19:20.495046 | orchestrator | Friday 17 April 2026 03:18:18 +0000 (0:00:00.763) 0:00:31.196 **********
2026-04-17 03:19:20.495055 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:19:20.495062 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:19:20.495071 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:19:20.495081 | orchestrator |
2026-04-17 03:19:20.495087 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-17 03:19:20.495114 | orchestrator | Friday 17 April 2026 03:18:19 +0000 (0:00:01.300) 0:00:32.496 **********
2026-04-17 03:19:20.495120 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:19:20.495135 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:19:20.495141 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:19:20.495147 | orchestrator |
2026-04-17 03:19:20.495152 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-17 03:19:20.495159 | orchestrator | Friday 17 April 2026 03:18:20 +0000 (0:00:00.730) 0:00:33.227 **********
2026-04-17 03:19:20.495164 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:19:20.495170 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:19:20.495176 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:19:20.495181 | orchestrator |
2026-04-17 03:19:20.495187 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-17 03:19:20.495192 | orchestrator | Friday 17 April 2026 03:18:20 +0000 (0:00:00.332) 0:00:33.560 **********
2026-04-17 03:19:20.495198 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:19:20.495203 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:19:20.495209 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:19:20.495216 | orchestrator |
2026-04-17 03:19:20.495344 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-17 03:19:20.495369 | orchestrator | Friday 17 April 2026 03:18:21 +0000 (0:00:01.142) 0:00:34.703 **********
2026-04-17 03:19:20.495373 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:19:20.495377 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:19:20.495381 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:19:20.495385 | orchestrator |
2026-04-17 03:19:20.495389 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-17 03:19:20.495392 | orchestrator | Friday 17 April 2026 03:18:24 +0000
(0:00:02.921) 0:00:37.624 ********** 2026-04-17 03:19:20.495396 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:19:20.495400 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:19:20.495404 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:19:20.495411 | orchestrator | 2026-04-17 03:19:20.495415 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-17 03:19:20.495420 | orchestrator | Friday 17 April 2026 03:18:25 +0000 (0:00:00.355) 0:00:37.980 ********** 2026-04-17 03:19:20.495424 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-17 03:19:20.495429 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-17 03:19:20.495433 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-17 03:19:20.495437 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-17 03:19:20.495440 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-17 03:19:20.495444 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-17 03:19:20.495448 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-17 03:19:20.495452 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-04-17 03:19:20.495455 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-17 03:19:20.495459 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-17 03:19:20.495463 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-17 03:19:20.495473 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-17 03:19:20.495477 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-17 03:19:20.495480 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-17 03:19:20.495484 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-04-17 03:19:20.495488 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:19:20.495491 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:19:20.495495 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:19:20.495499 | orchestrator |
2026-04-17 03:19:20.495506 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-17 03:19:20.495510 | orchestrator | Friday 17 April 2026 03:19:19 +0000 (0:00:53.944) 0:01:31.925 **********
2026-04-17 03:19:20.495513 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:19:20.495517 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:19:20.495521 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:19:20.495525 | orchestrator |
2026-04-17 03:19:20.495529 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-17 03:19:20.495532 | orchestrator | Friday 17 April 2026 03:19:19 +0000 (0:00:00.301) 0:01:32.226 **********
2026-04-17 03:19:20.495544 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:20:02.741633 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:20:02.741742 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:20:02.741752 | orchestrator |
2026-04-17 03:20:02.741759 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-17 03:20:02.741766 | orchestrator | Friday 17 April 2026 03:19:20 +0000 (0:00:00.963) 0:01:33.190 **********
2026-04-17 03:20:02.741772 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:20:02.741778 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:20:02.741784 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:20:02.741789 | orchestrator |
2026-04-17 03:20:02.741795 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-17 03:20:02.741809 | orchestrator | Friday 17 April 2026 03:19:21 +0000 (0:00:01.167) 0:01:34.358 **********
2026-04-17 03:20:02.741815 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:20:02.741821 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:20:02.741827 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:20:02.741833 | orchestrator |
2026-04-17 03:20:02.741838 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-17 03:20:02.741844 | orchestrator | Friday 17 April 2026 03:19:47 +0000 (0:00:25.583) 0:01:59.942 **********
2026-04-17 03:20:02.741849 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:20:02.741856 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:20:02.741861 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:20:02.741867 | orchestrator |
2026-04-17 03:20:02.741872 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-17 03:20:02.741878 | orchestrator | Friday 17 April 2026 03:19:47 +0000 (0:00:00.662) 0:02:00.604 **********
2026-04-17 03:20:02.741884 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:20:02.741889 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:20:02.741895 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:20:02.741900 | orchestrator |
2026-04-17 03:20:02.741906 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-17 03:20:02.741911 | orchestrator | Friday 17 April 2026 03:19:49 +0000 (0:00:01.472) 0:02:02.077 **********
2026-04-17 03:20:02.741917 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:20:02.741922 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:20:02.741928 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:20:02.741933 | orchestrator |
2026-04-17 03:20:02.741938 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-17 03:20:02.741958 | orchestrator | Friday 17 April 2026 03:19:49 +0000 (0:00:00.628) 0:02:02.705 **********
2026-04-17 03:20:02.741964 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:20:02.741969 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:20:02.741974 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:20:02.741982 | orchestrator |
2026-04-17 03:20:02.741991 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-17 03:20:02.742002 | orchestrator | Friday 17 April 2026 03:19:50 +0000 (0:00:00.871) 0:02:03.577 **********
2026-04-17 03:20:02.742079 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:20:02.742090 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:20:02.742099 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:20:02.742108 | orchestrator |
2026-04-17 03:20:02.742116 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-17 03:20:02.742124 | orchestrator | Friday 17 April 2026 03:19:51 +0000 (0:00:00.334) 0:02:03.912 **********
2026-04-17 03:20:02.742132 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:20:02.742140 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:20:02.742147 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:20:02.742156 | orchestrator |
2026-04-17 03:20:02.742165 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-17 03:20:02.742174 | orchestrator | Friday 17 April 2026 03:19:51 +0000 (0:00:00.644) 0:02:04.556 **********
2026-04-17 03:20:02.742184 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:20:02.742193 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:20:02.742208 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:20:02.742216 | orchestrator |
2026-04-17 03:20:02.742225 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-17 03:20:02.742254 | orchestrator | Friday 17 April 2026 03:19:52 +0000 (0:00:00.633) 0:02:05.189 **********
2026-04-17 03:20:02.742263 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:20:02.742271 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:20:02.742278 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:20:02.742286 | orchestrator |
2026-04-17 03:20:02.742294 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-17 03:20:02.742303 | orchestrator | Friday 17 April 2026 03:19:53 +0000 (0:00:00.884) 0:02:06.073 **********
2026-04-17 03:20:02.742314 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:20:02.742321 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:20:02.742330 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:20:02.742337 | orchestrator |
2026-04-17 03:20:02.742346 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-17 03:20:02.742354 | orchestrator | Friday 17 April 2026 03:19:54 +0000 (0:00:01.054) 0:02:07.128 **********
2026-04-17 03:20:02.742363 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:20:02.742380 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:20:02.742389 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:20:02.742396 | orchestrator |
2026-04-17 03:20:02.742404 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-17 03:20:02.742412 | orchestrator | Friday 17 April 2026 03:19:54 +0000 (0:00:00.285) 0:02:07.414 **********
2026-04-17 03:20:02.742422 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:20:02.742433 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:20:02.742442 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:20:02.742450 | orchestrator |
2026-04-17 03:20:02.742458 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-17 03:20:02.742466 | orchestrator | Friday 17 April 2026 03:19:54 +0000 (0:00:00.629) 0:02:07.699 **********
2026-04-17 03:20:02.742476 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:20:02.742484 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:20:02.742491 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:20:02.742499 | orchestrator |
2026-04-17 03:20:02.742506 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-17 03:20:02.742512 | orchestrator | Friday 17 April 2026 03:19:55 +0000 (0:00:00.629) 0:02:08.329 **********
2026-04-17 03:20:02.742531 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:20:02.742538 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:20:02.742566 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:20:02.742576 | orchestrator |
2026-04-17 03:20:02.742585 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-17 03:20:02.742594 | orchestrator | Friday 17 April 2026 03:19:56 +0000 (0:00:00.941) 0:02:09.270 **********
2026-04-17 03:20:02.742603 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-17 03:20:02.742610 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-17 03:20:02.742618 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-17 03:20:02.742625 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-17 03:20:02.742633 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-17 03:20:02.742641 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-17 03:20:02.742649 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-17 03:20:02.742658 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-17 03:20:02.742666 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-17 03:20:02.742675 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-17 03:20:02.742683 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-17 03:20:02.742691 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-17 03:20:02.742698 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-17 03:20:02.742706 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-17 03:20:02.742713 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-17 03:20:02.742721 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-17 03:20:02.742729 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-17 03:20:02.742737 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-17 03:20:02.742745 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-17 03:20:02.742753 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-17 03:20:02.742758 | orchestrator |
2026-04-17 03:20:02.742763 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-17 03:20:02.742767 | orchestrator |
2026-04-17 03:20:02.742772 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-17 03:20:02.742777 | orchestrator | Friday 17 April 2026 03:19:59 +0000 (0:00:02.947) 0:02:12.218 **********
2026-04-17 03:20:02.742782 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:20:02.742787 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:20:02.742792 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:20:02.742796 | orchestrator |
2026-04-17 03:20:02.742812 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-17 03:20:02.742818 | orchestrator | Friday 17 April 2026 03:19:59 +0000 (0:00:00.320) 0:02:12.538 **********
2026-04-17 03:20:02.742826 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:20:02.742833 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:20:02.742839 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:20:02.742860 | orchestrator |
2026-04-17 03:20:02.742869 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-17 03:20:02.742877 | orchestrator | Friday 17 April 2026 03:20:00 +0000 (0:00:00.937) 0:02:13.476 **********
2026-04-17 03:20:02.742884 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:20:02.742892 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:20:02.742899 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:20:02.742906 | orchestrator |
2026-04-17 03:20:02.742913 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-17 03:20:02.742920 | orchestrator | Friday 17 April 2026 03:20:01 +0000 (0:00:00.335) 0:02:13.811 **********
2026-04-17 03:20:02.742928 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:20:02.742936 | orchestrator |
2026-04-17 03:20:02.742944 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-17 03:20:02.742952 | orchestrator | Friday 17 April 2026 03:20:01 +0000 (0:00:00.508) 0:02:14.320 **********
2026-04-17 03:20:02.742960 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:20:02.742968 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:20:02.742975 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:20:02.742981 | orchestrator |
2026-04-17 03:20:02.742989 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-17 03:20:02.742997 | orchestrator | Friday 17 April 2026 03:20:02 +0000 (0:00:00.597) 0:02:14.917 **********
2026-04-17 03:20:02.743005 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:20:02.743013 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:20:02.743021 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:20:02.743028 | orchestrator |
2026-04-17 03:20:02.743036 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-17 03:20:02.743043 | orchestrator | Friday 17 April 2026 03:20:02 +0000 (0:00:00.344) 0:02:15.261 **********
2026-04-17 03:20:02.743058 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:21:32.712782 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:21:32.712906 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:21:32.712920 | orchestrator |
2026-04-17 03:21:32.712925 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-17 03:21:32.712932 | orchestrator | Friday 17 April 2026 03:20:02 +0000 (0:00:00.305) 0:02:15.567 **********
2026-04-17 03:21:32.712936 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:21:32.712941 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:21:32.712945 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:21:32.712949 | orchestrator |
2026-04-17 03:21:32.712953 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-17 03:21:32.712956 | orchestrator | Friday 17 April 2026 03:20:03 +0000 (0:00:00.621) 0:02:16.189 **********
2026-04-17 03:21:32.712960 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:21:32.712964 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:21:32.712968 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:21:32.712973 | orchestrator |
2026-04-17 03:21:32.713017 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-17 03:21:32.713022 | orchestrator | Friday 17 April 2026 03:20:04 +0000 (0:00:01.332) 0:02:17.522 **********
2026-04-17 03:21:32.713026 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:21:32.713031 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:21:32.713035 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:21:32.713040 | orchestrator |
2026-04-17 03:21:32.713044 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-17 03:21:32.713049 | orchestrator | Friday 17 April 2026 03:20:06 +0000 (0:00:01.245) 0:02:18.767 **********
2026-04-17 03:21:32.713052 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:21:32.713056 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:21:32.713060 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:21:32.713064 | orchestrator |
2026-04-17 03:21:32.713068 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-17 03:21:32.713089 | orchestrator |
2026-04-17 03:21:32.713093 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-17 03:21:32.713097 | orchestrator | Friday 17 April 2026 03:20:15 +0000 (0:00:09.649) 0:02:28.416 **********
2026-04-17 03:21:32.713100 | orchestrator | ok: [testbed-manager]
2026-04-17 03:21:32.713105 | orchestrator |
2026-04-17 03:21:32.713109 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-17 03:21:32.713115 | orchestrator | Friday 17 April 2026 03:20:16 +0000 (0:00:00.787) 0:02:29.204 **********
2026-04-17 03:21:32.713121 | orchestrator | changed: [testbed-manager]
2026-04-17 03:21:32.713126 | orchestrator |
2026-04-17 03:21:32.713135 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-17 03:21:32.713144 | orchestrator | Friday 17 April 2026 03:20:17 +0000 (0:00:00.724) 0:02:29.929 **********
2026-04-17 03:21:32.713149 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-17 03:21:32.713155 | orchestrator |
2026-04-17 03:21:32.713161 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-17 03:21:32.713167 | orchestrator | Friday 17 April 2026 03:20:17 +0000 (0:00:00.543) 0:02:30.472 **********
2026-04-17 03:21:32.713172 | orchestrator | changed: [testbed-manager]
2026-04-17 03:21:32.713178 | orchestrator |
2026-04-17 03:21:32.713184 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-17 03:21:32.713190 | orchestrator | Friday 17 April 2026 03:20:18 +0000 (0:00:00.976) 0:02:31.448 **********
2026-04-17 03:21:32.713196 | orchestrator | changed: [testbed-manager]
2026-04-17 03:21:32.713201 | orchestrator |
2026-04-17 03:21:32.713207 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-17 03:21:32.713213 | orchestrator | Friday 17 April 2026 03:20:19 +0000 (0:00:00.645) 0:02:32.093 **********
2026-04-17 03:21:32.713219 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-17 03:21:32.713225 | orchestrator |
2026-04-17 03:21:32.713232 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-17 03:21:32.713238 | orchestrator | Friday 17 April 2026 03:20:21 +0000 (0:00:01.613) 0:02:33.707 **********
2026-04-17 03:21:32.713244 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-17 03:21:32.713324 | orchestrator |
2026-04-17 03:21:32.713346 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-17 03:21:32.713358 | orchestrator | Friday 17 April 2026 03:20:21 +0000 (0:00:00.861) 0:02:34.569 **********
2026-04-17 03:21:32.713365 | orchestrator | changed: [testbed-manager]
2026-04-17 03:21:32.713372 | orchestrator |
2026-04-17 03:21:32.713377 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-17 03:21:32.713384 | orchestrator | Friday 17 April 2026 03:20:22 +0000 (0:00:00.450) 0:02:35.019 **********
2026-04-17 03:21:32.713389 | orchestrator | changed: [testbed-manager]
2026-04-17 03:21:32.713395 | orchestrator |
2026-04-17 03:21:32.713401 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-17 03:21:32.713407 | orchestrator |
2026-04-17 03:21:32.713413 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-17 03:21:32.713419 | orchestrator | Friday 17 April 2026 03:20:22 +0000 (0:00:00.167) 0:02:35.515 **********
2026-04-17 03:21:32.713426 | orchestrator | ok: [testbed-manager]
2026-04-17 03:21:32.713432 | orchestrator |
2026-04-17 03:21:32.713438 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-17 03:21:32.713445 | orchestrator | Friday 17 April 2026 03:20:22 +0000 (0:00:00.475) 0:02:35.682 **********
2026-04-17 03:21:32.713451 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-17 03:21:32.713458 | orchestrator |
2026-04-17 03:21:32.713465 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-17 03:21:32.713470 | orchestrator | Friday 17 April 2026 03:20:23 +0000 (0:00:00.857) 0:02:36.158 **********
2026-04-17 03:21:32.713474 | orchestrator | ok: [testbed-manager]
2026-04-17 03:21:32.713478 | orchestrator |
2026-04-17 03:21:32.713490 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-17 03:21:32.713494 | orchestrator | Friday 17 April 2026 03:20:24 +0000 (0:00:00.857) 0:02:37.016 **********
2026-04-17 03:21:32.713499 | orchestrator | ok: [testbed-manager]
2026-04-17 03:21:32.713504 | orchestrator |
2026-04-17 03:21:32.713521 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-17 03:21:32.713526 | orchestrator | Friday 17 April 2026 03:20:25 +0000 (0:00:01.609) 0:02:38.625 **********
2026-04-17 03:21:32.713531 | orchestrator | changed: [testbed-manager]
2026-04-17 03:21:32.713535 | orchestrator |
2026-04-17 03:21:32.713539 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-17 03:21:32.713543 | orchestrator | Friday 17 April 2026 03:20:26 +0000 (0:00:00.841) 0:02:39.467 **********
2026-04-17 03:21:32.713548 | orchestrator | ok: [testbed-manager]
2026-04-17 03:21:32.713552 | orchestrator |
2026-04-17 03:21:32.713556 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-17 03:21:32.713560 | orchestrator | Friday 17 April 2026 03:20:27 +0000 (0:00:00.510) 0:02:39.977 **********
2026-04-17 03:21:32.713564 | orchestrator | changed: [testbed-manager]
2026-04-17 03:21:32.713569 | orchestrator |
2026-04-17 03:21:32.713573 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-17 03:21:32.713577 | orchestrator | Friday 17 April 2026 03:20:35 +0000 (0:00:08.277) 0:02:48.255 **********
2026-04-17 03:21:32.713582 | orchestrator | changed: [testbed-manager]
2026-04-17 03:21:32.713586 | orchestrator |
2026-04-17 03:21:32.713590 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-17 03:21:32.713595 | orchestrator | Friday 17 April 2026 03:20:48 +0000 (0:00:12.739) 0:03:00.995 **********
2026-04-17 03:21:32.713599 | orchestrator | ok: [testbed-manager]
2026-04-17 03:21:32.713604 | orchestrator |
2026-04-17 03:21:32.713608 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-17 03:21:32.713613 | orchestrator |
2026-04-17 03:21:32.713617 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-17 03:21:32.713622 | orchestrator | Friday 17 April 2026 03:20:49 +0000 (0:00:00.798) 0:03:01.793 **********
2026-04-17 03:21:32.713626 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:21:32.713630 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:21:32.713635 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:21:32.713639 | orchestrator |
2026-04-17 03:21:32.713643 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-17 03:21:32.713647 | orchestrator | Friday 17 April 2026 03:20:49 +0000 (0:00:00.303) 0:03:02.097 **********
2026-04-17 03:21:32.713652 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:21:32.713656 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:21:32.713660 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:21:32.713664 | orchestrator |
2026-04-17 03:21:32.713669 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-17 03:21:32.713673 | orchestrator | Friday 17 April 2026 03:20:49 +0000 (0:00:00.325) 0:03:02.422 **********
2026-04-17 03:21:32.713678 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:21:32.713682 | orchestrator |
2026-04-17 03:21:32.713687 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-17 03:21:32.713691 | orchestrator | Friday 17 April 2026 03:20:50 +0000 (0:00:00.811) 0:03:03.233 **********
2026-04-17 03:21:32.713696 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-17 03:21:32.713700 | orchestrator | 2026-04-17 03:21:32.713705 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-04-17 03:21:32.713709 | orchestrator | Friday 17 April 2026 03:20:51 +0000 (0:00:00.880) 0:03:04.113 ********** 2026-04-17 03:21:32.713712 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 03:21:32.713716 | orchestrator | 2026-04-17 03:21:32.713720 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-17 03:21:32.713727 | orchestrator | Friday 17 April 2026 03:20:52 +0000 (0:00:00.843) 0:03:04.957 ********** 2026-04-17 03:21:32.713731 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:21:32.713735 | orchestrator | 2026-04-17 03:21:32.713739 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-17 03:21:32.713743 | orchestrator | Friday 17 April 2026 03:20:52 +0000 (0:00:00.127) 0:03:05.085 ********** 2026-04-17 03:21:32.713746 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 03:21:32.713750 | orchestrator | 2026-04-17 03:21:32.713754 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-17 03:21:32.713757 | orchestrator | Friday 17 April 2026 03:20:53 +0000 (0:00:00.992) 0:03:06.077 ********** 2026-04-17 03:21:32.713762 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:21:32.713766 | orchestrator | 2026-04-17 03:21:32.713770 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-17 03:21:32.713774 | orchestrator | Friday 17 April 2026 03:20:53 +0000 (0:00:00.129) 0:03:06.207 ********** 2026-04-17 03:21:32.713778 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:21:32.713782 | orchestrator | 2026-04-17 03:21:32.713785 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-17 03:21:32.713789 | 
orchestrator | Friday 17 April 2026 03:20:53 +0000 (0:00:00.121) 0:03:06.328 ********** 2026-04-17 03:21:32.713793 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:21:32.713797 | orchestrator | 2026-04-17 03:21:32.713803 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-17 03:21:32.713813 | orchestrator | Friday 17 April 2026 03:20:53 +0000 (0:00:00.125) 0:03:06.453 ********** 2026-04-17 03:21:32.713824 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:21:32.713830 | orchestrator | 2026-04-17 03:21:32.713836 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-17 03:21:32.713841 | orchestrator | Friday 17 April 2026 03:20:53 +0000 (0:00:00.123) 0:03:06.577 ********** 2026-04-17 03:21:32.713847 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-17 03:21:32.713853 | orchestrator | 2026-04-17 03:21:32.713859 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-17 03:21:32.713867 | orchestrator | Friday 17 April 2026 03:20:59 +0000 (0:00:05.540) 0:03:12.118 ********** 2026-04-17 03:21:32.713870 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-17 03:21:32.713874 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-17 03:21:32.713882 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-17 03:21:55.463100 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-17 03:21:55.463240 | orchestrator | 2026-04-17 03:21:55.463286 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-17 03:21:55.463305 | orchestrator | Friday 17 April 2026 03:21:32 +0000 (0:00:33.287) 0:03:45.405 ********** 2026-04-17 03:21:55.463316 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 
03:21:55.463326 | orchestrator | 2026-04-17 03:21:55.463336 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-17 03:21:55.463404 | orchestrator | Friday 17 April 2026 03:21:34 +0000 (0:00:01.310) 0:03:46.715 ********** 2026-04-17 03:21:55.463418 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-17 03:21:55.463428 | orchestrator | 2026-04-17 03:21:55.463438 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-17 03:21:55.463448 | orchestrator | Friday 17 April 2026 03:21:35 +0000 (0:00:01.407) 0:03:48.122 ********** 2026-04-17 03:21:55.463457 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-17 03:21:55.463467 | orchestrator | 2026-04-17 03:21:55.463477 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-17 03:21:55.463488 | orchestrator | Friday 17 April 2026 03:21:36 +0000 (0:00:01.131) 0:03:49.254 ********** 2026-04-17 03:21:55.463497 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:21:55.463507 | orchestrator | 2026-04-17 03:21:55.463539 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-17 03:21:55.463549 | orchestrator | Friday 17 April 2026 03:21:36 +0000 (0:00:00.116) 0:03:49.370 ********** 2026-04-17 03:21:55.463559 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-17 03:21:55.463570 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-17 03:21:55.463580 | orchestrator | 2026-04-17 03:21:55.463590 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-17 03:21:55.463599 | orchestrator | Friday 17 April 2026 03:21:38 +0000 (0:00:01.669) 0:03:51.040 ********** 2026-04-17 03:21:55.463609 | orchestrator | skipping: [testbed-node-0] 
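The BGP manifest tasks above use an apply-then-report pattern: apply the manifests, and only if that step fails print an error message instead of aborting the play outright. A minimal sketch of that shape, assuming a generic command wrapper rather than the role's actual implementation:

```python
import subprocess

def run_step(cmd: list) -> tuple:
    """Run a command; return (success, stderr) so a later task can report failures."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stderr.strip()
```

Usage would look like `ok, err = run_step(["kubectl", "apply", "-f", "bgp-manifests.yml"])`, with the error message task skipped when `ok` is true, as it is in this run.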
2026-04-17 03:21:55.463618 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:21:55.463628 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:21:55.463637 | orchestrator | 2026-04-17 03:21:55.463647 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-17 03:21:55.463656 | orchestrator | Friday 17 April 2026 03:21:38 +0000 (0:00:00.251) 0:03:51.291 ********** 2026-04-17 03:21:55.463666 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:21:55.463675 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:21:55.463685 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:21:55.463694 | orchestrator | 2026-04-17 03:21:55.463704 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-17 03:21:55.463713 | orchestrator | 2026-04-17 03:21:55.463722 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-17 03:21:55.463732 | orchestrator | Friday 17 April 2026 03:21:39 +0000 (0:00:00.786) 0:03:52.078 ********** 2026-04-17 03:21:55.463742 | orchestrator | ok: [testbed-manager] 2026-04-17 03:21:55.463758 | orchestrator | 2026-04-17 03:21:55.463780 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-17 03:21:55.463803 | orchestrator | Friday 17 April 2026 03:21:39 +0000 (0:00:00.309) 0:03:52.388 ********** 2026-04-17 03:21:55.463819 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-17 03:21:55.463834 | orchestrator | 2026-04-17 03:21:55.463849 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-17 03:21:55.463862 | orchestrator | Friday 17 April 2026 03:21:39 +0000 (0:00:00.218) 0:03:52.606 ********** 2026-04-17 03:21:55.463877 | orchestrator | changed: [testbed-manager] 2026-04-17 03:21:55.463893 | orchestrator | 2026-04-17 03:21:55.463908 | orchestrator | 
PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-17 03:21:55.463926 | orchestrator | 2026-04-17 03:21:55.463942 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-17 03:21:55.463959 | orchestrator | Friday 17 April 2026 03:21:45 +0000 (0:00:05.664) 0:03:58.270 ********** 2026-04-17 03:21:55.463975 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:21:55.463991 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:21:55.464002 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:21:55.464011 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:21:55.464021 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:21:55.464030 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:21:55.464040 | orchestrator | 2026-04-17 03:21:55.464050 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-17 03:21:55.464059 | orchestrator | Friday 17 April 2026 03:21:46 +0000 (0:00:00.606) 0:03:58.877 ********** 2026-04-17 03:21:55.464069 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-17 03:21:55.464078 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-17 03:21:55.464088 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-17 03:21:55.464097 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-17 03:21:55.464107 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-17 03:21:55.464128 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-17 03:21:55.464137 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-17 03:21:55.464146 | orchestrator | ok: 
[testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-17 03:21:55.464156 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-17 03:21:55.464165 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-17 03:21:55.464195 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-17 03:21:55.464205 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-17 03:21:55.464215 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-17 03:21:55.464225 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-17 03:21:55.464235 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-17 03:21:55.464244 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-17 03:21:55.464326 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-17 03:21:55.464339 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-17 03:21:55.464349 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-17 03:21:55.464358 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-17 03:21:55.464368 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-17 03:21:55.464377 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-17 03:21:55.464387 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-17 03:21:55.464396 | orchestrator | ok: [testbed-node-0 -> 
localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-17 03:21:55.464406 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-17 03:21:55.464415 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-17 03:21:55.464425 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-17 03:21:55.464434 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-17 03:21:55.464444 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-17 03:21:55.464453 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-17 03:21:55.464463 | orchestrator | 2026-04-17 03:21:55.464472 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-17 03:21:55.464482 | orchestrator | Friday 17 April 2026 03:21:54 +0000 (0:00:08.028) 0:04:06.906 ********** 2026-04-17 03:21:55.464491 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:21:55.464501 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:21:55.464510 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:21:55.464520 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:21:55.464529 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:21:55.464538 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:21:55.464548 | orchestrator | 2026-04-17 03:21:55.464557 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-17 03:21:55.464567 | orchestrator | Friday 17 April 2026 03:21:54 +0000 (0:00:00.556) 0:04:07.463 ********** 2026-04-17 03:21:55.464577 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:21:55.464586 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:21:55.464595 | orchestrator | skipping: 
[testbed-node-5] 2026-04-17 03:21:55.464612 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:21:55.464622 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:21:55.464631 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:21:55.464641 | orchestrator | 2026-04-17 03:21:55.464650 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:21:55.464660 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:21:55.464675 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-17 03:21:55.464692 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-17 03:21:55.464710 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-17 03:21:55.464737 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 03:21:55.464754 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 03:21:55.464771 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 03:21:55.464787 | orchestrator | 2026-04-17 03:21:55.464806 | orchestrator | 2026-04-17 03:21:55.464823 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:21:55.464840 | orchestrator | Friday 17 April 2026 03:21:55 +0000 (0:00:00.685) 0:04:08.149 ********** 2026-04-17 03:21:55.464856 | orchestrator | =============================================================================== 2026-04-17 03:21:55.464880 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.94s 2026-04-17 03:21:55.876748 | orchestrator | k3s_server_post : 
Wait for Cilium resources ---------------------------- 33.29s 2026-04-17 03:21:55.876852 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.58s 2026-04-17 03:21:55.876868 | orchestrator | kubectl : Install required packages ------------------------------------ 12.74s 2026-04-17 03:21:55.876875 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.65s 2026-04-17 03:21:55.876881 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.28s 2026-04-17 03:21:55.876887 | orchestrator | Manage labels ----------------------------------------------------------- 8.03s 2026-04-17 03:21:55.876893 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.66s 2026-04-17 03:21:55.876899 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.54s 2026-04-17 03:21:55.876904 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.49s 2026-04-17 03:21:55.876911 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.95s 2026-04-17 03:21:55.876918 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.92s 2026-04-17 03:21:55.876924 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.33s 2026-04-17 03:21:55.876930 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.78s 2026-04-17 03:21:55.876935 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.67s 2026-04-17 03:21:55.876941 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.61s 2026-04-17 03:21:55.876947 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.61s 2026-04-17 
03:21:55.876952 | orchestrator | k3s_server : Register node-token file access mode ----------------------- 1.47s 2026-04-17 03:21:55.876980 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.43s 2026-04-17 03:21:55.876986 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.41s 2026-04-17 03:21:56.189079 | orchestrator | + osism apply copy-kubeconfig 2026-04-17 03:22:08.256929 | orchestrator | 2026-04-17 03:22:08 | INFO  | Task 2de64779-a05e-4bfd-bbef-769b24ee87ce (copy-kubeconfig) was prepared for execution. 2026-04-17 03:22:08.257040 | orchestrator | 2026-04-17 03:22:08 | INFO  | It takes a moment until task 2de64779-a05e-4bfd-bbef-769b24ee87ce (copy-kubeconfig) has been started and output is visible here. 2026-04-17 03:22:15.424220 | orchestrator | 2026-04-17 03:22:15.424380 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-17 03:22:15.424397 | orchestrator | 2026-04-17 03:22:15.424407 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-17 03:22:15.424415 | orchestrator | Friday 17 April 2026 03:22:12 +0000 (0:00:00.173) 0:00:00.175 ********** 2026-04-17 03:22:15.424423 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-17 03:22:15.424432 | orchestrator | 2026-04-17 03:22:15.424441 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-17 03:22:15.424447 | orchestrator | Friday 17 April 2026 03:22:13 +0000 (0:00:00.751) 0:00:00.926 ********** 2026-04-17 03:22:15.424453 | orchestrator | changed: [testbed-manager] 2026-04-17 03:22:15.424458 | orchestrator | 2026-04-17 03:22:15.424481 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-17 03:22:15.424486 | orchestrator | Friday 17 April 2026 03:22:14 +0000 (0:00:01.225) 0:00:02.152 ********** 
2026-04-17 03:22:15.424492 | orchestrator | changed: [testbed-manager] 2026-04-17 03:22:15.424497 | orchestrator | 2026-04-17 03:22:15.424502 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:22:15.424513 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:22:15.424520 | orchestrator | 2026-04-17 03:22:15.424525 | orchestrator | 2026-04-17 03:22:15.424533 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:22:15.424541 | orchestrator | Friday 17 April 2026 03:22:15 +0000 (0:00:00.472) 0:00:02.625 ********** 2026-04-17 03:22:15.424552 | orchestrator | =============================================================================== 2026-04-17 03:22:15.424566 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.23s 2026-04-17 03:22:15.424573 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.75s 2026-04-17 03:22:15.424580 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.47s 2026-04-17 03:22:15.767908 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-04-17 03:22:27.959664 | orchestrator | 2026-04-17 03:22:27 | INFO  | Task 784cfad0-7e22-458d-809f-47d5f646df43 (openstackclient) was prepared for execution. 2026-04-17 03:22:27.959742 | orchestrator | 2026-04-17 03:22:27 | INFO  | It takes a moment until task 784cfad0-7e22-458d-809f-47d5f646df43 (openstackclient) has been started and output is visible here. 
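The copy-kubeconfig play fetches the kubeconfig from the first master, writes it into the configuration repository, and rewrites its `server:` address so the manager reaches the API at a routable address rather than the node-local one. A sketch of that rewrite step, with the target address assumed for illustration (192.168.16.10 is testbed-node-0's address in this run):

```python
import re

def point_kubeconfig_at(kubeconfig_text: str, api_server: str) -> str:
    """Rewrite the cluster 'server:' URL, e.g. from a loopback to a reachable address."""
    return re.sub(r"server:\s*\S+", f"server: {api_server}", kubeconfig_text)
```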
2026-04-17 03:23:13.831758 | orchestrator | 2026-04-17 03:23:13.831951 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-17 03:23:13.831983 | orchestrator | 2026-04-17 03:23:13.832003 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-17 03:23:13.832024 | orchestrator | Friday 17 April 2026 03:22:32 +0000 (0:00:00.242) 0:00:00.242 ********** 2026-04-17 03:23:13.832045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-17 03:23:13.832064 | orchestrator | 2026-04-17 03:23:13.832075 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-17 03:23:13.832087 | orchestrator | Friday 17 April 2026 03:22:32 +0000 (0:00:00.274) 0:00:00.517 ********** 2026-04-17 03:23:13.832128 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-17 03:23:13.832170 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-17 03:23:13.832191 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-17 03:23:13.832208 | orchestrator | 2026-04-17 03:23:13.832227 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-17 03:23:13.832246 | orchestrator | Friday 17 April 2026 03:22:34 +0000 (0:00:01.201) 0:00:01.718 ********** 2026-04-17 03:23:13.832265 | orchestrator | changed: [testbed-manager] 2026-04-17 03:23:13.832484 | orchestrator | 2026-04-17 03:23:13.832568 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-17 03:23:13.832589 | orchestrator | Friday 17 April 2026 03:22:35 +0000 (0:00:01.236) 0:00:02.955 ********** 2026-04-17 03:23:13.832606 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage 
openstackclient service (10 retries left). 2026-04-17 03:23:13.832622 | orchestrator | ok: [testbed-manager] 2026-04-17 03:23:13.832641 | orchestrator | 2026-04-17 03:23:13.832659 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-17 03:23:13.832676 | orchestrator | Friday 17 April 2026 03:23:08 +0000 (0:00:33.395) 0:00:36.351 ********** 2026-04-17 03:23:13.832695 | orchestrator | changed: [testbed-manager] 2026-04-17 03:23:13.832714 | orchestrator | 2026-04-17 03:23:13.832732 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-17 03:23:13.832751 | orchestrator | Friday 17 April 2026 03:23:09 +0000 (0:00:00.983) 0:00:37.334 ********** 2026-04-17 03:23:13.832762 | orchestrator | ok: [testbed-manager] 2026-04-17 03:23:13.832773 | orchestrator | 2026-04-17 03:23:13.832784 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-17 03:23:13.832795 | orchestrator | Friday 17 April 2026 03:23:10 +0000 (0:00:00.635) 0:00:37.970 ********** 2026-04-17 03:23:13.832806 | orchestrator | changed: [testbed-manager] 2026-04-17 03:23:13.832816 | orchestrator | 2026-04-17 03:23:13.832827 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-17 03:23:13.832838 | orchestrator | Friday 17 April 2026 03:23:11 +0000 (0:00:01.510) 0:00:39.480 ********** 2026-04-17 03:23:13.832849 | orchestrator | changed: [testbed-manager] 2026-04-17 03:23:13.832860 | orchestrator | 2026-04-17 03:23:13.832871 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-17 03:23:13.832881 | orchestrator | Friday 17 April 2026 03:23:12 +0000 (0:00:00.692) 0:00:40.173 ********** 2026-04-17 03:23:13.832892 | orchestrator | changed: [testbed-manager] 2026-04-17 03:23:13.832903 | orchestrator | 2026-04-17 03:23:13.832913 | orchestrator | 
RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-17 03:23:13.832924 | orchestrator | Friday 17 April 2026 03:23:13 +0000 (0:00:00.550) 0:00:40.724 ********** 2026-04-17 03:23:13.832934 | orchestrator | ok: [testbed-manager] 2026-04-17 03:23:13.832945 | orchestrator | 2026-04-17 03:23:13.832955 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:23:13.832966 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:23:13.832978 | orchestrator | 2026-04-17 03:23:13.832989 | orchestrator | 2026-04-17 03:23:13.832999 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:23:13.833010 | orchestrator | Friday 17 April 2026 03:23:13 +0000 (0:00:00.385) 0:00:41.109 ********** 2026-04-17 03:23:13.833020 | orchestrator | =============================================================================== 2026-04-17 03:23:13.833037 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.40s 2026-04-17 03:23:13.833055 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.51s 2026-04-17 03:23:13.833073 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.24s 2026-04-17 03:23:13.833113 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.20s 2026-04-17 03:23:13.833132 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.98s 2026-04-17 03:23:13.833150 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.69s 2026-04-17 03:23:13.833168 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.64s 2026-04-17 03:23:13.833187 | orchestrator | osism.services.openstackclient : Wait for an healthy service 
------------ 0.55s 2026-04-17 03:23:13.833207 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.39s 2026-04-17 03:23:13.833227 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.27s 2026-04-17 03:23:16.102000 | orchestrator | 2026-04-17 03:23:16 | INFO  | Task b2c8aa7a-dbbb-490a-aa91-55e15796106c (common) was prepared for execution. 2026-04-17 03:23:16.102157 | orchestrator | 2026-04-17 03:23:16 | INFO  | It takes a moment until task b2c8aa7a-dbbb-490a-aa91-55e15796106c (common) has been started and output is visible here. 2026-04-17 03:23:28.138698 | orchestrator | 2026-04-17 03:23:28.138816 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-17 03:23:28.138831 | orchestrator | 2026-04-17 03:23:28.138838 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-17 03:23:28.138845 | orchestrator | Friday 17 April 2026 03:23:20 +0000 (0:00:00.274) 0:00:00.274 ********** 2026-04-17 03:23:28.138852 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:23:28.138860 | orchestrator | 2026-04-17 03:23:28.138866 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-17 03:23:28.138872 | orchestrator | Friday 17 April 2026 03:23:21 +0000 (0:00:01.305) 0:00:01.579 ********** 2026-04-17 03:23:28.138879 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 03:23:28.138885 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 03:23:28.138892 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 03:23:28.138898 | orchestrator | changed: [testbed-node-1] => 
(item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 03:23:28.138904 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 03:23:28.138910 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 03:23:28.138916 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 03:23:28.138922 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 03:23:28.138928 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 03:23:28.138949 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 03:23:28.138957 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 03:23:28.138963 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 03:23:28.138969 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 03:23:28.138975 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 03:23:28.138981 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 03:23:28.138987 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 03:23:28.138993 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 03:23:28.139000 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 03:23:28.139024 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 03:23:28.139030 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 03:23:28.139036 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 03:23:28.139042 | orchestrator | 2026-04-17 03:23:28.139049 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-17 03:23:28.139055 | orchestrator | Friday 17 April 2026 03:23:24 +0000 (0:00:02.670) 0:00:04.250 ********** 2026-04-17 03:23:28.139061 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:23:28.139069 | orchestrator | 2026-04-17 03:23:28.139075 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-17 03:23:28.139081 | orchestrator | Friday 17 April 2026 03:23:25 +0000 (0:00:01.272) 0:00:05.522 ********** 2026-04-17 03:23:28.139093 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:28.139102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:28.139126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:28.139134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:28.139140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:28.139147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:28.139159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:28.139166 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:28.139173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:28.139191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184228 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184254 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184357 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184365 | orchestrator 
| changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:29.184391 | orchestrator | 2026-04-17 03:23:29.184399 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-17 03:23:29.184406 | orchestrator | Friday 17 April 2026 03:23:28 +0000 (0:00:03.475) 0:00:08.997 ********** 2026-04-17 03:23:29.184418 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:29.184430 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:29.184444 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:29.184453 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:23:29.184464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:29.184486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:29.736407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:29.736565 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:23:29.736636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:29.736652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-17 03:23:29.736663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:29.736673 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:23:29.736683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:29.736705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:29.736716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:29.736726 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:23:29.736757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:29.736776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:29.736789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:29.736801 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:23:29.736813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:29.736825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:29.736836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:29.736847 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:23:29.736859 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:29.736878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:30.555255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:30.555417 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:23:30.555429 | orchestrator | 2026-04-17 03:23:30.555437 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-17 03:23:30.555445 | orchestrator | Friday 17 April 2026 03:23:29 +0000 (0:00:00.836) 0:00:09.834 
********** 2026-04-17 03:23:30.555455 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:30.555464 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:30.555471 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:30.555496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:30.555505 | orchestrator | skipping: [testbed-manager] 2026-04-17 03:23:30.555516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:30.555542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:30.555549 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:23:30.555577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:30.555585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:30.555593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:30.555600 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:23:30.555606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:30.555613 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:30.555620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:30.555627 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:23:30.555636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:30.555660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:35.264670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:35.264801 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:23:35.264833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:35.264857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:35.264878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:35.264898 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:23:35.264917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 03:23:35.264937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-17 03:23:35.264985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:23:35.264997 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:23:35.265009 | orchestrator |
2026-04-17 03:23:35.265021 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-17 03:23:35.265033 | orchestrator | Friday 17 April 2026 03:23:31 +0000 (0:00:01.678) 0:00:11.512 **********
2026-04-17 03:23:35.265044 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:23:35.265055 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:23:35.265066 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:23:35.265076 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:23:35.265106 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:23:35.265118 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:23:35.265129 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:23:35.265154 | orchestrator |
2026-04-17 03:23:35.265168 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-17 03:23:35.265181 | orchestrator | Friday 17 April 2026 03:23:32 +0000 (0:00:00.654) 0:00:12.167 **********
2026-04-17 03:23:35.265193 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:23:35.265206 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:23:35.265219 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:23:35.265231 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:23:35.265241 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:23:35.265252 |
orchestrator | skipping: [testbed-node-4] 2026-04-17 03:23:35.265263 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:23:35.265273 | orchestrator | 2026-04-17 03:23:35.265314 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-17 03:23:35.265328 | orchestrator | Friday 17 April 2026 03:23:32 +0000 (0:00:00.800) 0:00:12.967 ********** 2026-04-17 03:23:35.265340 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:35.265375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:35.265395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:35.265418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:35.265434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:35.265446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:35.265473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:38.030253 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030418 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030423 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030454 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 
03:23:38.030470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030484 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:38.030489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:23:38.030494 | orchestrator |
2026-04-17 03:23:38.030499 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-17 03:23:38.030505 | orchestrator | Friday 17 April 2026 03:23:36 +0000 (0:00:03.321) 0:00:16.289 **********
2026-04-17 03:23:38.030510 | orchestrator | [WARNING]: Skipped
2026-04-17 03:23:38.030516 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-17 03:23:38.030522 | orchestrator | to this access issue:
2026-04-17 03:23:38.030527 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-17 03:23:38.030532 | orchestrator | directory
2026-04-17 03:23:38.030536 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 03:23:38.030541 | orchestrator |
2026-04-17 03:23:38.030546 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-17 03:23:38.030550 | orchestrator | Friday 17 April 2026 03:23:37 +0000 (0:00:00.933) 0:00:17.222 **********
2026-04-17 03:23:38.030555 | orchestrator | [WARNING]: Skipped
2026-04-17 03:23:38.030562 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-17 03:23:47.848285 | orchestrator | to this access issue:
2026-04-17 03:23:47.848501 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-17 03:23:47.848522 | orchestrator | directory
2026-04-17 03:23:47.848536 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 03:23:47.848548 | orchestrator |
2026-04-17 03:23:47.848560 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-17 03:23:47.848573 | orchestrator | Friday 17 April 2026 03:23:38 +0000 (0:00:01.176) 0:00:18.399 **********
2026-04-17 03:23:47.848584 | orchestrator | [WARNING]: Skipped
2026-04-17 03:23:47.848622 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-17 03:23:47.848634 | orchestrator | to this access issue:
2026-04-17 03:23:47.848645 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-04-17 03:23:47.848656 | orchestrator | directory
2026-04-17 03:23:47.848666 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 03:23:47.848677 | orchestrator |
2026-04-17 03:23:47.848688 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-04-17 03:23:47.848699 | orchestrator | Friday 17 April 2026 03:23:39 +0000 (0:00:00.843) 0:00:19.242 **********
2026-04-17 03:23:47.848710 | orchestrator | [WARNING]: Skipped
2026-04-17 03:23:47.848720 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-04-17 03:23:47.848731 | orchestrator | to this access issue:
2026-04-17 03:23:47.848742 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-04-17 03:23:47.848753 | orchestrator | directory
2026-04-17 03:23:47.848764 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 03:23:47.848774 | orchestrator |
2026-04-17 03:23:47.848785 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-04-17 03:23:47.848798 | orchestrator | Friday 17 April 2026 03:23:39 +0000 (0:00:00.809) 0:00:20.051 **********
2026-04-17 03:23:47.848810 | orchestrator | changed: [testbed-manager]
2026-04-17 03:23:47.848824 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:23:47.848836 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:23:47.848849 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:23:47.848861 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:23:47.848893 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:23:47.848906 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:23:47.848918 | orchestrator |
2026-04-17 03:23:47.848930 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-04-17 03:23:47.848943 | orchestrator | Friday 17 April 2026 03:23:42 +0000 (0:00:02.626) 0:00:22.677 **********
2026-04-17 03:23:47.848955 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-17 03:23:47.848968 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-17 03:23:47.848981 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-17 03:23:47.848992 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-17 03:23:47.849005 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-17 03:23:47.849033 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-17 03:23:47.849057 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-17 03:23:47.849071 | orchestrator |
2026-04-17 03:23:47.849081 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-04-17 03:23:47.849100 | orchestrator | Friday 17 April 2026 03:23:44 +0000 (0:00:01.945) 0:00:24.978 **********
2026-04-17 03:23:47.849112 | orchestrator | changed: [testbed-manager]
2026-04-17 03:23:47.849122 | orchestrator | changed:
[testbed-node-0] 2026-04-17 03:23:47.849133 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:23:47.849144 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:23:47.849155 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:23:47.849166 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:23:47.849176 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:23:47.849187 | orchestrator | 2026-04-17 03:23:47.849198 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-17 03:23:47.849209 | orchestrator | Friday 17 April 2026 03:23:46 +0000 (0:00:01.945) 0:00:26.924 ********** 2026-04-17 03:23:47.849223 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:47.849266 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:47.849279 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:47.849312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:47.849324 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:47.849336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:47.849352 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:47.849371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:47.849392 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:47.849413 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:53.791662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:53.791811 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:53.791831 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:53.791860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:53.791900 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:53.791913 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-04-17 03:23:53.791924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:23:53.791955 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:53.791968 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:53.791979 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:53.791990 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:53.792003 | orchestrator | 2026-04-17 03:23:53.792016 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-17 03:23:53.792029 | orchestrator | Friday 17 April 2026 03:23:48 +0000 (0:00:01.466) 0:00:28.391 ********** 2026-04-17 03:23:53.792040 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 03:23:53.792051 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 03:23:53.792062 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 03:23:53.792081 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 03:23:53.792097 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 03:23:53.792115 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 03:23:53.792133 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 03:23:53.792152 | orchestrator | 2026-04-17 03:23:53.792178 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-17 03:23:53.792199 | orchestrator | Friday 17 April 2026 
03:23:50 +0000 (0:00:01.883) 0:00:30.275 ********** 2026-04-17 03:23:53.792217 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 03:23:53.792235 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 03:23:53.792266 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 03:23:53.792283 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 03:23:53.792376 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 03:23:53.792397 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 03:23:53.792414 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 03:23:53.792433 | orchestrator | 2026-04-17 03:23:53.792451 | orchestrator | TASK [common : Check common containers] **************************************** 2026-04-17 03:23:53.792470 | orchestrator | Friday 17 April 2026 03:23:51 +0000 (0:00:01.670) 0:00:31.945 ********** 2026-04-17 03:23:53.792487 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:53.792514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:54.271914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:54.272015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:54.272051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:54.272076 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:54.272087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 03:23:54.272098 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:54.272109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:54.272138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:54.272149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:54.272165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:54.272180 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:54.272190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:54.272201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:54.272227 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:23:54.272244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:25:11.926858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:25:11.926979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:25:11.926990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:25:11.926998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:25:11.927006 | orchestrator | 2026-04-17 03:25:11.927014 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-17 03:25:11.927024 | orchestrator | Friday 17 April 2026 03:23:54 +0000 (0:00:02.427) 0:00:34.373 ********** 2026-04-17 03:25:11.927031 | orchestrator | changed: [testbed-manager] 2026-04-17 03:25:11.927039 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:25:11.927046 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:25:11.927052 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:25:11.927059 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:25:11.927066 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:25:11.927073 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:25:11.927080 | orchestrator | 2026-04-17 
03:25:11.927087 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-17 03:25:11.927094 | orchestrator | Friday 17 April 2026 03:23:55 +0000 (0:00:01.132) 0:00:35.506 ********** 2026-04-17 03:25:11.927100 | orchestrator | changed: [testbed-manager] 2026-04-17 03:25:11.927104 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:25:11.927108 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:25:11.927113 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:25:11.927117 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:25:11.927121 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:25:11.927125 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:25:11.927129 | orchestrator | 2026-04-17 03:25:11.927133 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 03:25:11.927137 | orchestrator | Friday 17 April 2026 03:23:56 +0000 (0:00:00.947) 0:00:36.454 ********** 2026-04-17 03:25:11.927141 | orchestrator | 2026-04-17 03:25:11.927145 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 03:25:11.927150 | orchestrator | Friday 17 April 2026 03:23:56 +0000 (0:00:00.059) 0:00:36.513 ********** 2026-04-17 03:25:11.927154 | orchestrator | 2026-04-17 03:25:11.927158 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 03:25:11.927162 | orchestrator | Friday 17 April 2026 03:23:56 +0000 (0:00:00.059) 0:00:36.572 ********** 2026-04-17 03:25:11.927166 | orchestrator | 2026-04-17 03:25:11.927170 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 03:25:11.927174 | orchestrator | Friday 17 April 2026 03:23:56 +0000 (0:00:00.059) 0:00:36.632 ********** 2026-04-17 03:25:11.927178 | orchestrator | 2026-04-17 03:25:11.927182 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-04-17 03:25:11.927186 | orchestrator | Friday 17 April 2026 03:23:56 +0000 (0:00:00.166) 0:00:36.798 ********** 2026-04-17 03:25:11.927208 | orchestrator | 2026-04-17 03:25:11.927212 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 03:25:11.927222 | orchestrator | Friday 17 April 2026 03:23:56 +0000 (0:00:00.058) 0:00:36.857 ********** 2026-04-17 03:25:11.927226 | orchestrator | 2026-04-17 03:25:11.927230 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 03:25:11.927235 | orchestrator | Friday 17 April 2026 03:23:56 +0000 (0:00:00.055) 0:00:36.912 ********** 2026-04-17 03:25:11.927239 | orchestrator | 2026-04-17 03:25:11.927243 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-17 03:25:11.927247 | orchestrator | Friday 17 April 2026 03:23:56 +0000 (0:00:00.083) 0:00:36.995 ********** 2026-04-17 03:25:11.927251 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:25:11.927255 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:25:11.927260 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:25:11.927264 | orchestrator | changed: [testbed-manager] 2026-04-17 03:25:11.927268 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:25:11.927283 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:25:11.927287 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:25:11.927291 | orchestrator | 2026-04-17 03:25:11.927295 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-17 03:25:11.927337 | orchestrator | Friday 17 April 2026 03:24:33 +0000 (0:00:36.390) 0:01:13.385 ********** 2026-04-17 03:25:11.927342 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:25:11.927346 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:25:11.927350 | orchestrator | changed: 
[testbed-node-4] 2026-04-17 03:25:11.927354 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:25:11.927358 | orchestrator | changed: [testbed-manager] 2026-04-17 03:25:11.927362 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:25:11.927366 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:25:11.927370 | orchestrator | 2026-04-17 03:25:11.927375 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-17 03:25:11.927379 | orchestrator | Friday 17 April 2026 03:25:06 +0000 (0:00:33.364) 0:01:46.750 ********** 2026-04-17 03:25:11.927383 | orchestrator | ok: [testbed-manager] 2026-04-17 03:25:11.927388 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:25:11.927392 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:25:11.927396 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:25:11.927400 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:25:11.927404 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:25:11.927409 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:25:11.927414 | orchestrator | 2026-04-17 03:25:11.927419 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-17 03:25:11.927423 | orchestrator | Friday 17 April 2026 03:25:08 +0000 (0:00:01.831) 0:01:48.581 ********** 2026-04-17 03:25:11.927428 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:25:11.927432 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:25:11.927437 | orchestrator | changed: [testbed-manager] 2026-04-17 03:25:11.927442 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:25:11.927446 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:25:11.927451 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:25:11.927455 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:25:11.927460 | orchestrator | 2026-04-17 03:25:11.927464 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 
03:25:11.927470 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 03:25:11.927490 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 03:25:11.927497 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 03:25:11.927502 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 03:25:11.927511 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 03:25:11.927515 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 03:25:11.927520 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 03:25:11.927525 | orchestrator | 2026-04-17 03:25:11.927530 | orchestrator | 2026-04-17 03:25:11.927534 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:25:11.927539 | orchestrator | Friday 17 April 2026 03:25:11 +0000 (0:00:03.414) 0:01:51.996 ********** 2026-04-17 03:25:11.927544 | orchestrator | =============================================================================== 2026-04-17 03:25:11.927549 | orchestrator | common : Restart fluentd container ------------------------------------- 36.39s 2026-04-17 03:25:11.927553 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 33.36s 2026-04-17 03:25:11.927559 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.48s 2026-04-17 03:25:11.927563 | orchestrator | common : Restart cron container ----------------------------------------- 3.41s 2026-04-17 03:25:11.927568 | orchestrator | common : Copying over config.json files for services -------------------- 
3.32s 2026-04-17 03:25:11.927572 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.67s 2026-04-17 03:25:11.927577 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.63s 2026-04-17 03:25:11.927581 | orchestrator | common : Check common containers ---------------------------------------- 2.43s 2026-04-17 03:25:11.927586 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.30s 2026-04-17 03:25:11.927591 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.95s 2026-04-17 03:25:11.927595 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.88s 2026-04-17 03:25:11.927600 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.83s 2026-04-17 03:25:11.927604 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.68s 2026-04-17 03:25:11.927609 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.67s 2026-04-17 03:25:11.927614 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.47s 2026-04-17 03:25:11.927618 | orchestrator | common : include_tasks -------------------------------------------------- 1.31s 2026-04-17 03:25:11.927627 | orchestrator | common : include_tasks -------------------------------------------------- 1.27s 2026-04-17 03:25:12.203758 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.18s 2026-04-17 03:25:12.203837 | orchestrator | common : Creating log volume -------------------------------------------- 1.13s 2026-04-17 03:25:12.203845 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 0.95s 2026-04-17 03:25:14.619677 | orchestrator | 2026-04-17 03:25:14 | INFO  | Task 1ee9bbe1-c833-432d-a453-7d23c5587ce8 (loadbalancer) 
was prepared for execution. 2026-04-17 03:25:14.619770 | orchestrator | 2026-04-17 03:25:14 | INFO  | It takes a moment until task 1ee9bbe1-c833-432d-a453-7d23c5587ce8 (loadbalancer) has been started and output is visible here. 2026-04-17 03:25:28.911278 | orchestrator | 2026-04-17 03:25:28.911407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 03:25:28.911419 | orchestrator | 2026-04-17 03:25:28.911425 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 03:25:28.911432 | orchestrator | Friday 17 April 2026 03:25:19 +0000 (0:00:00.252) 0:00:00.252 ********** 2026-04-17 03:25:28.911439 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:25:28.911466 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:25:28.911473 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:25:28.911479 | orchestrator | 2026-04-17 03:25:28.911486 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 03:25:28.911493 | orchestrator | Friday 17 April 2026 03:25:19 +0000 (0:00:00.269) 0:00:00.521 ********** 2026-04-17 03:25:28.911500 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-17 03:25:28.911506 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-17 03:25:28.911513 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-17 03:25:28.911520 | orchestrator | 2026-04-17 03:25:28.911527 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-17 03:25:28.911534 | orchestrator | 2026-04-17 03:25:28.911540 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-17 03:25:28.911547 | orchestrator | Friday 17 April 2026 03:25:20 +0000 (0:00:00.470) 0:00:00.992 ********** 2026-04-17 03:25:28.911554 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:25:28.911561 | orchestrator | 2026-04-17 03:25:28.911582 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-04-17 03:25:28.911589 | orchestrator | Friday 17 April 2026 03:25:20 +0000 (0:00:00.554) 0:00:01.547 ********** 2026-04-17 03:25:28.911595 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:25:28.911601 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:25:28.911608 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:25:28.911615 | orchestrator | 2026-04-17 03:25:28.911621 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-17 03:25:28.911628 | orchestrator | Friday 17 April 2026 03:25:21 +0000 (0:00:00.580) 0:00:02.127 ********** 2026-04-17 03:25:28.911635 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:25:28.911641 | orchestrator | 2026-04-17 03:25:28.911648 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-17 03:25:28.911654 | orchestrator | Friday 17 April 2026 03:25:21 +0000 (0:00:00.655) 0:00:02.783 ********** 2026-04-17 03:25:28.911661 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:25:28.911667 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:25:28.911673 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:25:28.911680 | orchestrator | 2026-04-17 03:25:28.911686 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-17 03:25:28.911693 | orchestrator | Friday 17 April 2026 03:25:22 +0000 (0:00:00.648) 0:00:03.431 ********** 2026-04-17 03:25:28.911700 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-17 03:25:28.911707 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 
1}) 2026-04-17 03:25:28.911714 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-17 03:25:28.911720 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-17 03:25:28.911727 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-17 03:25:28.911733 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-17 03:25:28.911740 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-17 03:25:28.911748 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-17 03:25:28.911754 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-17 03:25:28.911761 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-17 03:25:28.911767 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-17 03:25:28.911773 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-17 03:25:28.911787 | orchestrator | 2026-04-17 03:25:28.911795 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-17 03:25:28.911802 | orchestrator | Friday 17 April 2026 03:25:24 +0000 (0:00:02.073) 0:00:05.504 ********** 2026-04-17 03:25:28.911808 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-17 03:25:28.911816 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-17 03:25:28.911822 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-17 03:25:28.911829 | orchestrator | 2026-04-17 03:25:28.911836 | orchestrator | TASK [module-load : Persist modules via 
modules-load.d] ************************ 2026-04-17 03:25:28.911843 | orchestrator | Friday 17 April 2026 03:25:25 +0000 (0:00:00.712) 0:00:06.217 ********** 2026-04-17 03:25:28.911850 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-17 03:25:28.911857 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-17 03:25:28.911863 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-17 03:25:28.911878 | orchestrator | 2026-04-17 03:25:28.911885 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-17 03:25:28.911891 | orchestrator | Friday 17 April 2026 03:25:26 +0000 (0:00:01.227) 0:00:07.444 ********** 2026-04-17 03:25:28.911898 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-17 03:25:28.911905 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:25:28.911927 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-17 03:25:28.911934 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:25:28.911941 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-17 03:25:28.911947 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:25:28.911954 | orchestrator | 2026-04-17 03:25:28.911960 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-17 03:25:28.911968 | orchestrator | Friday 17 April 2026 03:25:27 +0000 (0:00:00.483) 0:00:07.927 ********** 2026-04-17 03:25:28.911977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-17 03:25:28.911994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-17 03:25:28.912002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-17 03:25:28.912014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 03:25:28.912021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 03:25:28.912033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 03:25:33.976669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 03:25:33.976771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 03:25:33.976781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 03:25:33.976788 | orchestrator | 2026-04-17 03:25:33.976796 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-17 03:25:33.976804 | orchestrator | Friday 17 April 2026 03:25:28 +0000 (0:00:01.755) 0:00:09.683 ********** 2026-04-17 03:25:33.976810 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:25:33.976818 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:25:33.976823 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:25:33.976848 | orchestrator | 2026-04-17 03:25:33.976856 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-17 03:25:33.976862 | orchestrator | Friday 17 April 2026 03:25:29 +0000 
(0:00:00.917) 0:00:10.600 ********** 2026-04-17 03:25:33.976868 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-17 03:25:33.976875 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-17 03:25:33.976881 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-17 03:25:33.976887 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-17 03:25:33.976893 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-17 03:25:33.976900 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-17 03:25:33.976906 | orchestrator | 2026-04-17 03:25:33.976912 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-17 03:25:33.976918 | orchestrator | Friday 17 April 2026 03:25:31 +0000 (0:00:01.406) 0:00:12.007 ********** 2026-04-17 03:25:33.976925 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:25:33.976931 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:25:33.976937 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:25:33.976943 | orchestrator | 2026-04-17 03:25:33.976950 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-17 03:25:33.976956 | orchestrator | Friday 17 April 2026 03:25:32 +0000 (0:00:00.890) 0:00:12.898 ********** 2026-04-17 03:25:33.976963 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:25:33.976969 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:25:33.976976 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:25:33.976982 | orchestrator | 2026-04-17 03:25:33.976988 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-17 03:25:33.976994 | orchestrator | Friday 17 April 2026 03:25:33 +0000 (0:00:01.266) 0:00:14.165 ********** 2026-04-17 03:25:33.977001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-17 03:25:33.977024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:25:33.977031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:25:33.977040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-17 03:25:33.977053 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:25:33.977060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-17 03:25:33.977098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 
03:25:33.977105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:25:33.977112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-17 03:25:33.977118 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:25:33.977129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 03:25:36.630815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:25:36.630947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:25:36.630961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 2985'], 'timeout': '30'}}})  2026-04-17 03:25:36.630969 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:25:36.630977 | orchestrator | 2026-04-17 03:25:36.630985 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-17 03:25:36.630992 | orchestrator | Friday 17 April 2026 03:25:33 +0000 (0:00:00.582) 0:00:14.747 ********** 2026-04-17 03:25:36.630999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-17 03:25:36.631006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-17 03:25:36.631013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-17 03:25:36.631052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 03:25:36.631061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:25:36.631068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-17 03:25:36.631074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 03:25:36.631081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:25:36.631087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-17 03:25:36.631106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 03:25:44.676119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:25:44.676217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378', '__omit_place_holder__b16ebed5000f18d04d7cd95ebf3084783b24e378'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-17 03:25:44.676227 | orchestrator | 2026-04-17 03:25:44.676235 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-17 03:25:44.676244 | orchestrator | Friday 17 April 2026 03:25:36 +0000 (0:00:02.653) 0:00:17.401 ********** 2026-04-17 03:25:44.676251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-17 03:25:44.676260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-17 03:25:44.676268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-17 03:25:44.676297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 03:25:44.676343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 03:25:44.676351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 03:25:44.676358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 03:25:44.676366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2026-04-17 03:25:44.676373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 03:25:44.676380 | orchestrator | 2026-04-17 03:25:44.676386 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-17 03:25:44.676393 | orchestrator | Friday 17 April 2026 03:25:39 +0000 (0:00:03.036) 0:00:20.438 ********** 2026-04-17 03:25:44.676400 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-17 03:25:44.676415 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-17 03:25:44.676421 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-17 03:25:44.676428 | orchestrator | 2026-04-17 03:25:44.676435 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-17 03:25:44.676442 | orchestrator | Friday 17 April 2026 03:25:41 +0000 (0:00:01.779) 0:00:22.218 ********** 2026-04-17 03:25:44.676448 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-17 03:25:44.676455 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-17 03:25:44.676461 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-17 03:25:44.676468 | orchestrator |
2026-04-17 03:25:44.676474 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-04-17 03:25:44.676481 | orchestrator | Friday 17 April 2026 03:25:44 +0000 (0:00:02.711) 0:00:24.929 **********
2026-04-17 03:25:44.676488 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:25:44.676496 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:25:44.676502 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:25:44.676508 | orchestrator |
2026-04-17 03:25:44.676520 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-04-17 03:25:55.787589 | orchestrator | Friday 17 April 2026 03:25:44 +0000 (0:00:00.523) 0:00:25.453 **********
2026-04-17 03:25:55.787697 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-17 03:25:55.787722 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-17 03:25:55.787730 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-17 03:25:55.787736 | orchestrator |
2026-04-17 03:25:55.787743 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-04-17 03:25:55.787750 | orchestrator | Friday 17 April 2026 03:25:46 +0000 (0:00:01.968) 0:00:27.421 **********
2026-04-17 03:25:55.787757 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-17 03:25:55.787764 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-17 03:25:55.787770 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-17 03:25:55.787777 | orchestrator |
2026-04-17 03:25:55.787783 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-04-17 03:25:55.787790 | orchestrator | Friday 17 April 2026 03:25:48 +0000 (0:00:02.073) 0:00:29.495 **********
2026-04-17 03:25:55.787797 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-04-17 03:25:55.787804 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-04-17 03:25:55.787811 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-04-17 03:25:55.787817 | orchestrator |
2026-04-17 03:25:55.787833 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-04-17 03:25:55.787840 | orchestrator | Friday 17 April 2026 03:25:50 +0000 (0:00:01.369) 0:00:30.864 **********
2026-04-17 03:25:55.787847 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-04-17 03:25:55.787854 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-04-17 03:25:55.787860 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-04-17 03:25:55.787867 | orchestrator |
2026-04-17 03:25:55.787873 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-17 03:25:55.787879 | orchestrator | Friday 17 April 2026 03:25:51 +0000 (0:00:01.365) 0:00:32.230 **********
2026-04-17 03:25:55.787905 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:25:55.787913 | orchestrator |
2026-04-17 03:25:55.787920 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-04-17 03:25:55.787926 | orchestrator | Friday 17 April 2026 03:25:51 +0000 (0:00:00.526) 0:00:32.756 **********
2026-04-17 03:25:55.787935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 03:25:55.787944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 03:25:55.787951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 03:25:55.787979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:55.787986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:55.787992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:55.788006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:55.788013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:55.788019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:55.788026 | orchestrator |
2026-04-17 03:25:55.788032 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-04-17 03:25:55.788039 | orchestrator | Friday 17 April 2026 03:25:55 +0000 (0:00:03.200) 0:00:35.957 **********
2026-04-17 03:25:55.788056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 03:25:56.576070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:56.576192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:56.576233 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:25:56.576248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 03:25:56.576260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:56.576271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:56.576282 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:25:56.576292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 03:25:56.576410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:56.576426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:56.576449 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:25:56.576459 | orchestrator |
2026-04-17 03:25:56.576468 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-04-17 03:25:56.576503 | orchestrator | Friday 17 April 2026 03:25:55 +0000 (0:00:00.607) 0:00:36.564 **********
2026-04-17 03:25:56.576529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 03:25:56.576539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:56.576560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:56.576571 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:25:56.576593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 03:25:56.576631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:57.437828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:57.437948 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:25:57.437965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 03:25:57.437978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:57.437988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:57.437997 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:25:57.438006 | orchestrator |
2026-04-17 03:25:57.438066 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-17 03:25:57.438078 | orchestrator | Friday 17 April 2026 03:25:56 +0000 (0:00:00.782) 0:00:37.347 **********
2026-04-17 03:25:57.438088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 03:25:57.438098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:57.438126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:57.438144 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:25:57.438153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 03:25:57.438163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:57.438172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:57.438182 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:25:57.438190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 03:25:57.438237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:57.438253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:57.438277 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:25:58.847996 | orchestrator |
2026-04-17 03:25:58.848103 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-17 03:25:58.848139 | orchestrator | Friday 17 April 2026 03:25:57 +0000 (0:00:00.857) 0:00:38.204 **********
2026-04-17 03:25:58.848151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 03:25:58.848162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:58.848169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:58.848174 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:25:58.848179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 03:25:58.848183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:58.848207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:58.848245 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:25:58.848270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 03:25:58.848277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:58.848283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:58.848290 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:25:58.848296 | orchestrator |
2026-04-17 03:25:58.848323 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-17 03:25:58.848330 | orchestrator | Friday 17 April 2026 03:25:58 +0000 (0:00:00.588) 0:00:38.792 **********
2026-04-17 03:25:58.848336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 03:25:58.848343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 03:25:58.848350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 03:25:58.848363 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:25:58.848378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 03:25:59.923891 | orchestrator | skipping: [testbed-node-1] => (item={'key':
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:25:59.923988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:25:59.923998 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:25:59.924006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 03:25:59.924012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:25:59.924018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:25:59.924041 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:25:59.924047 | orchestrator | 2026-04-17 03:25:59.924055 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-17 03:25:59.924062 | orchestrator | Friday 17 April 2026 03:25:58 +0000 (0:00:00.830) 0:00:39.623 ********** 2026-04-17 03:25:59.924078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-17 03:25:59.924100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:25:59.924110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:25:59.924122 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:25:59.924133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-17 03:25:59.924144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:25:59.924152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:25:59.924168 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:25:59.924178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 03:25:59.924197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:26:01.302132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:26:01.302207 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:26:01.302215 | orchestrator | 2026-04-17 03:26:01.302221 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-17 03:26:01.302226 | orchestrator | Friday 17 April 2026 03:25:59 +0000 (0:00:01.067) 0:00:40.690 ********** 2026-04-17 03:26:01.302232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-17 03:26:01.302238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:26:01.302243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:26:01.302261 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:26:01.302266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-17 03:26:01.302283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:26:01.302322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:26:01.302331 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:26:01.302338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 03:26:01.302343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:26:01.302350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:26:01.302363 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:26:01.302370 | orchestrator | 2026-04-17 03:26:01.302375 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend 
internal TLS key] **** 2026-04-17 03:26:01.302381 | orchestrator | Friday 17 April 2026 03:26:00 +0000 (0:00:00.574) 0:00:41.264 ********** 2026-04-17 03:26:01.302387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-17 03:26:01.302394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:26:01.302412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:26:07.679865 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:26:07.679971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-17 03:26:07.679989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:26:07.680000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:26:07.680032 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:26:07.680044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 03:26:07.680054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 03:26:07.680078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 03:26:07.680089 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:26:07.680099 | orchestrator | 2026-04-17 03:26:07.680117 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-17 03:26:07.680136 | orchestrator | Friday 17 April 2026 03:26:01 +0000 (0:00:00.811) 0:00:42.075 ********** 2026-04-17 03:26:07.680152 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-17 03:26:07.680191 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-17 03:26:07.680208 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-17 03:26:07.680223 | orchestrator | 2026-04-17 03:26:07.680239 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-17 03:26:07.680256 | orchestrator | Friday 17 April 2026 03:26:02 +0000 (0:00:01.700) 0:00:43.776 ********** 2026-04-17 03:26:07.680274 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-17 03:26:07.680288 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-17 03:26:07.680328 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-17 03:26:07.680345 | orchestrator | 2026-04-17 03:26:07.680363 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-17 03:26:07.680394 | orchestrator | Friday 17 April 2026 03:26:04 +0000 (0:00:01.631) 0:00:45.407 ********** 2026-04-17 03:26:07.680442 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 03:26:07.680459 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 03:26:07.680475 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 03:26:07.680491 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 03:26:07.680507 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:26:07.680523 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 03:26:07.680540 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:26:07.680556 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 03:26:07.680572 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:26:07.680586 | orchestrator | 2026-04-17 03:26:07.680596 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-17 03:26:07.680605 | orchestrator | Friday 17 April 2026 03:26:05 +0000 (0:00:00.784) 0:00:46.192 ********** 2026-04-17 03:26:07.680616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-17 03:26:07.680628 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-17 03:26:07.680646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-17 03:26:07.680690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-04-17 03:26:11.785629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 03:26:11.785774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 03:26:11.785785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 03:26:11.785793 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 03:26:11.785800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 03:26:11.785806 | orchestrator | 2026-04-17 03:26:11.785815 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-17 03:26:11.785823 | orchestrator | Friday 17 April 2026 03:26:07 +0000 (0:00:02.261) 0:00:48.453 ********** 2026-04-17 03:26:11.785846 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:26:11.785853 | orchestrator | 2026-04-17 03:26:11.785859 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-17 03:26:11.785865 | orchestrator | Friday 17 April 2026 03:26:08 +0000 (0:00:00.777) 0:00:49.231 ********** 2026-04-17 03:26:11.785889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 03:26:11.785908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 03:26:11.785915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 03:26:11.785922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 
'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 03:26:11.785928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 03:26:11.785939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 03:26:11.785945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 03:26:11.785964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 03:26:12.398461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 03:26:12.398588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 03:26:12.398600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 03:26:12.398607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 03:26:12.398620 | orchestrator | 2026-04-17 03:26:12.398631 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-17 03:26:12.398664 | orchestrator | Friday 17 April 2026 03:26:11 +0000 (0:00:03.330) 0:00:52.561 ********** 2026-04-17 03:26:12.398674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 03:26:12.398730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 03:26:12.398740 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 03:26:12.398749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 03:26:12.398758 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:26:12.398766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 03:26:12.398776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 03:26:12.398787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 03:26:12.398792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 03:26:12.398797 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:26:12.398807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 03:26:20.562209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 03:26:20.562384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 03:26:20.562394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 03:26:20.562419 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:26:20.562425 | orchestrator | 2026-04-17 03:26:20.562431 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-17 03:26:20.562436 | orchestrator | Friday 17 April 2026 03:26:12 +0000 (0:00:00.613) 0:00:53.174 ********** 2026-04-17 03:26:20.562442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-17 03:26:20.562450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-17 03:26:20.562456 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:26:20.562476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-17 03:26:20.562480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-17 03:26:20.562484 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:26:20.562488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-17 03:26:20.562492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-17 03:26:20.562496 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:26:20.562500 | orchestrator | 2026-04-17 03:26:20.562504 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-17 03:26:20.562508 | orchestrator | Friday 17 April 2026 03:26:13 +0000 (0:00:01.178) 0:00:54.353 ********** 2026-04-17 03:26:20.562513 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:26:20.562517 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:26:20.562520 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:26:20.562524 | orchestrator | 2026-04-17 03:26:20.562528 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-17 03:26:20.562533 | orchestrator | Friday 17 April 2026 03:26:14 +0000 (0:00:01.263) 0:00:55.617 ********** 2026-04-17 03:26:20.562537 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:26:20.562541 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:26:20.562545 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:26:20.562549 | orchestrator | 2026-04-17 03:26:20.562553 | orchestrator | 
TASK [include_role : barbican] ************************************************* 2026-04-17 03:26:20.562556 | orchestrator | Friday 17 April 2026 03:26:16 +0000 (0:00:01.903) 0:00:57.520 ********** 2026-04-17 03:26:20.562561 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:26:20.562564 | orchestrator | 2026-04-17 03:26:20.562586 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-17 03:26:20.562591 | orchestrator | Friday 17 April 2026 03:26:17 +0000 (0:00:00.595) 0:00:58.116 ********** 2026-04-17 03:26:20.562597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 03:26:20.562609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 03:26:20.562618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:26:20.562622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 03:26:20.562626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 03:26:20.562636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:26:21.186525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 03:26:21.186662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 03:26:21.186676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:26:21.186685 | orchestrator | 2026-04-17 03:26:21.186695 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-17 03:26:21.186703 | orchestrator | Friday 17 April 2026 03:26:20 +0000 (0:00:03.217) 0:01:01.333 ********** 2026-04-17 03:26:21.186712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 03:26:21.186720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 03:26:21.186751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-17 03:26:21.186759 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:26:21.186773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-17 03:26:21.186780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-17 03:26:21.186788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-17 03:26:21.186796 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:26:21.186803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-17 03:26:21.186817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-17 03:26:30.377731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-17 03:26:30.377816 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:26:30.377823 | orchestrator |
2026-04-17 03:26:30.377829 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-04-17 03:26:30.377835 | orchestrator | Friday 17 April 2026 03:26:21 +0000 (0:00:00.623) 0:01:01.957 **********
2026-04-17 03:26:30.377840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-17 03:26:30.377857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-17 03:26:30.377863 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:26:30.377868 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-17 03:26:30.377872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-17 03:26:30.377876 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:26:30.377879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-17 03:26:30.377883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-17 03:26:30.377887 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:26:30.377891 | orchestrator |
2026-04-17 03:26:30.377895 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-04-17 03:26:30.377898 | orchestrator | Friday 17 April 2026 03:26:22 +0000 (0:00:00.849) 0:01:02.806 **********
2026-04-17 03:26:30.377902 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:26:30.377906 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:26:30.377910 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:26:30.377914 | orchestrator |
2026-04-17 03:26:30.377918 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-04-17 03:26:30.377922 | orchestrator | Friday 17 April 2026 03:26:23 +0000 (0:00:01.507) 0:01:04.314 **********
2026-04-17 03:26:30.377925 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:26:30.377948 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:26:30.377952 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:26:30.377956 | orchestrator |
2026-04-17 03:26:30.377960 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-04-17 03:26:30.377964 | orchestrator | Friday 17 April 2026 03:26:25 +0000 (0:00:01.918) 0:01:06.232 **********
2026-04-17 03:26:30.377968 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:26:30.377971 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:26:30.377975 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:26:30.377979 | orchestrator |
2026-04-17 03:26:30.377983 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-04-17 03:26:30.377986 | orchestrator | Friday 17 April 2026 03:26:25 +0000 (0:00:00.314) 0:01:06.547 **********
2026-04-17 03:26:30.377990 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:26:30.377994 | orchestrator |
2026-04-17 03:26:30.377998 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-04-17 03:26:30.378126 | orchestrator | Friday 17 April 2026 03:26:26 +0000 (0:00:00.653) 0:01:07.200 **********
2026-04-17 03:26:30.378134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-04-17 03:26:30.378142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-04-17 03:26:30.378146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-04-17 03:26:30.378150 | orchestrator |
2026-04-17 03:26:30.378155 | orchestrator | TASK
[haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-04-17 03:26:30.378159 | orchestrator | Friday 17 April 2026 03:26:29 +0000 (0:00:02.615) 0:01:09.816 **********
2026-04-17 03:26:30.378168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-04-17 03:26:30.378172 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:26:30.378176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-04-17 03:26:30.378184 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:26:37.700553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-04-17 03:26:37.700668 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:26:37.700685 | orchestrator |
2026-04-17 03:26:37.700699 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-04-17 03:26:37.700729 | orchestrator | Friday 17 April 2026 03:26:30 +0000 (0:00:01.324) 0:01:11.141 **********
2026-04-17 03:26:37.700744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-04-17 03:26:37.700756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-04-17 03:26:37.700792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-04-17 03:26:37.700805 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:26:37.700816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-04-17 03:26:37.700828 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:26:37.700839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-04-17 03:26:37.700850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081
check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-04-17 03:26:37.700861 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:26:37.700872 | orchestrator |
2026-04-17 03:26:37.700883 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-04-17 03:26:37.700894 | orchestrator | Friday 17 April 2026 03:26:32 +0000 (0:00:01.673) 0:01:12.814 **********
2026-04-17 03:26:37.700905 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:26:37.700915 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:26:37.700926 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:26:37.700937 | orchestrator |
2026-04-17 03:26:37.700952 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-04-17 03:26:37.700963 | orchestrator | Friday 17 April 2026 03:26:32 +0000 (0:00:00.410) 0:01:13.225 **********
2026-04-17 03:26:37.700974 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:26:37.700984 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:26:37.700995 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:26:37.701006 | orchestrator |
2026-04-17 03:26:37.701017 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-04-17 03:26:37.701028 | orchestrator | Friday 17 April 2026 03:26:33 +0000 (0:00:01.218) 0:01:14.444 **********
2026-04-17 03:26:37.701038 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:26:37.701049 | orchestrator |
2026-04-17 03:26:37.701060 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-04-17 03:26:37.701099 | orchestrator | Friday 17 April 2026 03:26:34 +0000 (0:00:00.878) 0:01:15.322 **********
2026-04-17 03:26:37.701126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 03:26:37.701142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:26:37.701160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 03:26:37.701181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 03:26:37.701213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 03:26:38.352764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 03:26:38.352852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:26:38.352861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 03:26:38.352869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:26:38.352876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 03:26:38.352897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 03:26:38.352924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 03:26:38.352932 | orchestrator |
2026-04-17 03:26:38.352943 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-04-17 03:26:38.352951 | orchestrator | Friday 17 April 2026 03:26:37 +0000 (0:00:03.231) 0:01:18.554 **********
2026-04-17 03:26:38.352959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 03:26:38.352966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:26:38.352972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 03:26:38.352979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 03:26:38.352987 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:26:38.353005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 03:26:47.554785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:26:47.554931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 03:26:47.554952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 03:26:47.554969 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:26:47.554981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 03:26:47.555067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:26:47.555099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 03:26:47.555109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 03:26:47.555118 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:26:47.555129 | orchestrator |
2026-04-17 03:26:47.555140 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-04-17 03:26:47.555150 | orchestrator | Friday 17 April 2026 03:26:38 +0000 (0:00:00.674) 0:01:19.229 **********
2026-04-17 03:26:47.555162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-17 03:26:47.555173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-17 03:26:47.555182 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:26:47.555191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-17 03:26:47.555200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-17 03:26:47.555208 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:26:47.555217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-17 03:26:47.555226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-17 03:26:47.555235 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:26:47.555243 | orchestrator |
2026-04-17 03:26:47.555252 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-04-17 03:26:47.555263 | orchestrator | Friday 17 April 2026 03:26:39 +0000 (0:00:01.113) 0:01:20.342 **********
2026-04-17 03:26:47.555272 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:26:47.555282 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:26:47.555301 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:26:47.555332 | orchestrator |
2026-04-17 03:26:47.555342 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-04-17 03:26:47.555352 | orchestrator | Friday 17 April 2026 03:26:40 +0000 (0:00:01.288) 0:01:21.630 **********
2026-04-17 03:26:47.555352 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:26:47.555362 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:26:47.555373 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:26:47.555382 | orchestrator | 2026-04-17 03:26:47.555392 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-17 03:26:47.555402 | orchestrator | Friday 17 April 2026 03:26:42 +0000 (0:00:01.941) 0:01:23.572 ********** 2026-04-17 03:26:47.555412 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:26:47.555422 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:26:47.555432 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:26:47.555442 | orchestrator | 2026-04-17 03:26:47.555450 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-17 03:26:47.555459 | orchestrator | Friday 17 April 2026 03:26:43 +0000 (0:00:00.283) 0:01:23.856 ********** 2026-04-17 03:26:47.555468 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:26:47.555476 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:26:47.555485 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:26:47.555493 | orchestrator | 2026-04-17 03:26:47.555502 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-17 03:26:47.555511 | orchestrator | Friday 17 April 2026 03:26:43 +0000 (0:00:00.290) 0:01:24.147 ********** 2026-04-17 03:26:47.555520 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:26:47.555528 | orchestrator | 2026-04-17 03:26:47.555537 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-17 03:26:47.555546 | orchestrator | Friday 17 April 2026 03:26:44 +0000 (0:00:00.976) 0:01:25.124 ********** 2026-04-17 03:26:47.555569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 03:26:47.913900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 03:26:47.914083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 03:26:47.914142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 03:26:47.914151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 03:26:47.914160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-04-17 03:26:47.914185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 03:26:47.914209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 03:26:47.914214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 03:26:47.914223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 03:26:47.914227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 03:26:47.914231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 03:26:47.914238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:26:47.914246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 03:26:48.512021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 03:26:48.512131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 03:26:48.512142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 03:26:48.512151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 03:26:48.512170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 03:26:48.512177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:26:48.512201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 03:26:48.512209 | orchestrator | 2026-04-17 03:26:48.512217 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-17 03:26:48.512231 | orchestrator | Friday 17 April 2026 03:26:47 +0000 (0:00:03.565) 0:01:28.690 ********** 2026-04-17 03:26:48.512238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 03:26:48.512246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 03:26:48.512253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 03:26:48.512260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 03:26:48.512267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 03:26:48.512279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:26:49.066518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 03:26:49.066598 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:26:49.066609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 03:26:49.066617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 03:26:49.067047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 03:26:49.067090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 03:26:49.067100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 03:26:49.067180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:26:49.067194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 03:26:49.067203 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:26:49.067213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 03:26:49.067221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 03:26:49.067230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 03:26:49.067237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 03:26:49.067263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 03:26:58.665283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:26:58.665475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 03:26:58.665497 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:26:58.665506 | orchestrator | 2026-04-17 03:26:58.665515 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-17 03:26:58.665524 | orchestrator | Friday 17 April 2026 03:26:49 +0000 (0:00:01.149) 0:01:29.840 ********** 2026-04-17 03:26:58.665532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-17 03:26:58.665541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-17 03:26:58.665549 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:26:58.665556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-17 03:26:58.665563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-17 03:26:58.665570 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:26:58.665577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-17 03:26:58.665584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-17 03:26:58.665611 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:26:58.665618 | orchestrator | 2026-04-17 03:26:58.665625 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-17 03:26:58.665632 | orchestrator | Friday 17 April 2026 03:26:50 +0000 (0:00:01.321) 0:01:31.161 ********** 2026-04-17 03:26:58.665638 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:26:58.665646 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:26:58.665652 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:26:58.665659 | orchestrator | 2026-04-17 03:26:58.665666 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-17 03:26:58.665672 | 
orchestrator | Friday 17 April 2026 03:26:51 +0000 (0:00:01.259) 0:01:32.420 ********** 2026-04-17 03:26:58.665679 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:26:58.665686 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:26:58.665692 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:26:58.665699 | orchestrator | 2026-04-17 03:26:58.665705 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-17 03:26:58.665715 | orchestrator | Friday 17 April 2026 03:26:53 +0000 (0:00:02.001) 0:01:34.422 ********** 2026-04-17 03:26:58.665729 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:26:58.665746 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:26:58.665757 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:26:58.665767 | orchestrator | 2026-04-17 03:26:58.665778 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-17 03:26:58.665787 | orchestrator | Friday 17 April 2026 03:26:53 +0000 (0:00:00.309) 0:01:34.731 ********** 2026-04-17 03:26:58.665797 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:26:58.665808 | orchestrator | 2026-04-17 03:26:58.665819 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-17 03:26:58.665831 | orchestrator | Friday 17 April 2026 03:26:54 +0000 (0:00:01.006) 0:01:35.738 ********** 2026-04-17 03:26:58.665875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 03:26:58.665892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 03:26:58.665928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 03:27:01.537293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 03:27:01.537505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 03:27:01.537550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 03:27:01.537574 | orchestrator | 2026-04-17 03:27:01.537588 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-17 03:27:01.537600 | orchestrator | Friday 17 April 2026 03:26:58 +0000 (0:00:03.817) 0:01:39.555 ********** 2026-04-17 03:27:01.537614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 03:27:01.537642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 03:27:04.800956 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:04.801036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 03:27:04.801057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 03:27:04.801080 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:04.801098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 03:27:04.801107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 03:27:04.801112 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:04.801125 | orchestrator | 2026-04-17 03:27:04.801130 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-17 03:27:04.801136 | orchestrator | Friday 17 April 2026 03:27:01 +0000 (0:00:02.890) 0:01:42.446 ********** 2026-04-17 03:27:04.801141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 03:27:04.801150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 03:27:12.332456 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:12.332566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 03:27:12.332585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 03:27:12.332598 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:12.332614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 03:27:12.332659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 03:27:12.332689 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:12.332707 | orchestrator | 2026-04-17 
03:27:12.332726 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-17 03:27:12.332745 | orchestrator | Friday 17 April 2026 03:27:04 +0000 (0:00:03.131) 0:01:45.578 ********** 2026-04-17 03:27:12.332764 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:27:12.332811 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:27:12.332829 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:27:12.332846 | orchestrator | 2026-04-17 03:27:12.332864 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-17 03:27:12.332882 | orchestrator | Friday 17 April 2026 03:27:05 +0000 (0:00:01.152) 0:01:46.730 ********** 2026-04-17 03:27:12.332900 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:27:12.332918 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:27:12.332937 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:27:12.332955 | orchestrator | 2026-04-17 03:27:12.332973 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-17 03:27:12.332988 | orchestrator | Friday 17 April 2026 03:27:07 +0000 (0:00:01.802) 0:01:48.533 ********** 2026-04-17 03:27:12.332999 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:12.333010 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:12.333020 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:12.333031 | orchestrator | 2026-04-17 03:27:12.333042 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-17 03:27:12.333052 | orchestrator | Friday 17 April 2026 03:27:08 +0000 (0:00:00.274) 0:01:48.807 ********** 2026-04-17 03:27:12.333063 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:27:12.333074 | orchestrator | 2026-04-17 03:27:12.333084 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] 
******************** 2026-04-17 03:27:12.333095 | orchestrator | Friday 17 April 2026 03:27:08 +0000 (0:00:00.935) 0:01:49.743 ********** 2026-04-17 03:27:12.333127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 03:27:12.333141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 03:27:12.333153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 03:27:12.333165 | orchestrator | 2026-04-17 03:27:12.333176 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-17 03:27:12.333188 | orchestrator | Friday 17 April 2026 03:27:11 +0000 (0:00:02.790) 0:01:52.534 ********** 2026-04-17 03:27:12.333199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 03:27:12.333221 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:12.333233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 03:27:12.333244 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:12.333369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 03:27:12.333402 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:12.333421 | orchestrator | 2026-04-17 03:27:12.333441 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-17 03:27:12.333453 | orchestrator | Friday 17 April 2026 03:27:12 +0000 (0:00:00.376) 0:01:52.910 ********** 2026-04-17 03:27:12.333464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-17 03:27:12.333488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-17 03:27:20.622947 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:20.623051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}})  2026-04-17 03:27:20.623065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-17 03:27:20.623074 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:20.623081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-17 03:27:20.623088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-17 03:27:20.623094 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:20.623122 | orchestrator | 2026-04-17 03:27:20.623129 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-17 03:27:20.623137 | orchestrator | Friday 17 April 2026 03:27:12 +0000 (0:00:00.797) 0:01:53.708 ********** 2026-04-17 03:27:20.623144 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:27:20.623150 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:27:20.623155 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:27:20.623161 | orchestrator | 2026-04-17 03:27:20.623167 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-17 03:27:20.623173 | orchestrator | Friday 17 April 2026 03:27:14 +0000 (0:00:01.264) 0:01:54.973 ********** 2026-04-17 03:27:20.623180 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:27:20.623186 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:27:20.623193 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:27:20.623196 | orchestrator | 2026-04-17 03:27:20.623200 | orchestrator | TASK [include_role : heat] 
***************************************************** 2026-04-17 03:27:20.623204 | orchestrator | Friday 17 April 2026 03:27:16 +0000 (0:00:01.984) 0:01:56.958 ********** 2026-04-17 03:27:20.623208 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:20.623211 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:20.623226 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:20.623230 | orchestrator | 2026-04-17 03:27:20.623234 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-17 03:27:20.623238 | orchestrator | Friday 17 April 2026 03:27:16 +0000 (0:00:00.319) 0:01:57.278 ********** 2026-04-17 03:27:20.623241 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:27:20.623245 | orchestrator | 2026-04-17 03:27:20.623249 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-17 03:27:20.623253 | orchestrator | Friday 17 April 2026 03:27:17 +0000 (0:00:01.044) 0:01:58.322 ********** 2026-04-17 03:27:20.623273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 03:27:20.623288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 03:27:20.623298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 03:27:22.280810 | orchestrator | 2026-04-17 03:27:22.280915 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-17 03:27:22.280931 | orchestrator | Friday 17 April 2026 03:27:20 +0000 
(0:00:03.077) 0:02:01.399 ********** 2026-04-17 03:27:22.280967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 03:27:22.280984 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:22.281017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 03:27:22.281052 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:22.281071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 03:27:22.281084 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:22.281092 | orchestrator | 2026-04-17 03:27:22.281098 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-17 03:27:22.281105 | orchestrator | Friday 17 April 2026 03:27:21 +0000 (0:00:00.675) 0:02:02.075 ********** 2026-04-17 03:27:22.281112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-17 03:27:22.281121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 03:27:22.281136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-17 03:27:22.281149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 03:27:30.521476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-17 03:27:30.521587 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:30.521599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-17 03:27:30.521610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 03:27:30.521633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}})  2026-04-17 03:27:30.521641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 03:27:30.521649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-17 03:27:30.521656 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:30.521662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-17 03:27:30.521668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 03:27:30.521675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-17 03:27:30.521700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 03:27:30.521707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-17 03:27:30.521714 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:30.521720 | orchestrator | 2026-04-17 03:27:30.521727 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-17 03:27:30.521736 | orchestrator | Friday 17 April 2026 03:27:22 +0000 (0:00:00.979) 0:02:03.054 ********** 2026-04-17 03:27:30.521742 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:27:30.521757 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:27:30.521763 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:27:30.521769 | orchestrator | 2026-04-17 03:27:30.521775 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-17 03:27:30.521781 | orchestrator | Friday 17 April 2026 03:27:23 +0000 (0:00:01.517) 0:02:04.572 ********** 2026-04-17 03:27:30.521787 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:27:30.521794 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:27:30.521800 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:27:30.521807 | orchestrator | 2026-04-17 03:27:30.521813 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-17 03:27:30.521819 | orchestrator | Friday 17 April 2026 03:27:25 +0000 (0:00:01.951) 0:02:06.523 ********** 2026-04-17 03:27:30.521825 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:30.521831 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:30.521851 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:30.521857 | orchestrator | 2026-04-17 03:27:30.521863 | 
orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-17 03:27:30.521870 | orchestrator | Friday 17 April 2026 03:27:26 +0000 (0:00:00.306) 0:02:06.830 ********** 2026-04-17 03:27:30.521875 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:30.521882 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:30.521888 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:30.521894 | orchestrator | 2026-04-17 03:27:30.521900 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-17 03:27:30.521906 | orchestrator | Friday 17 April 2026 03:27:26 +0000 (0:00:00.303) 0:02:07.133 ********** 2026-04-17 03:27:30.521912 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:27:30.521918 | orchestrator | 2026-04-17 03:27:30.521924 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-17 03:27:30.521930 | orchestrator | Friday 17 April 2026 03:27:27 +0000 (0:00:01.057) 0:02:08.190 ********** 2026-04-17 03:27:30.521943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 03:27:30.521953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 03:27:30.521966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 03:27:30.521973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 03:27:30.521985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 03:27:31.092804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 03:27:31.092906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 03:27:31.092954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 03:27:31.092972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 03:27:31.092990 | orchestrator | 2026-04-17 03:27:31.093007 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-17 03:27:31.093023 | orchestrator | Friday 17 April 2026 03:27:30 +0000 (0:00:03.102) 0:02:11.293 ********** 2026-04-17 03:27:31.093061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 03:27:31.093091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 03:27:31.093108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 03:27:31.093134 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:31.093151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 03:27:31.093169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 03:27:31.093186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 03:27:31.093201 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:31.093237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 03:27:39.915963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 03:27:39.916088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 03:27:39.916100 | 
orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:39.916109 | orchestrator | 2026-04-17 03:27:39.916116 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-17 03:27:39.916123 | orchestrator | Friday 17 April 2026 03:27:31 +0000 (0:00:00.569) 0:02:11.862 ********** 2026-04-17 03:27:39.916130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-17 03:27:39.916140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-17 03:27:39.916148 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:39.916154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-17 03:27:39.916160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-17 03:27:39.916166 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:39.916172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-17 03:27:39.916178 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-17 03:27:39.916184 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:39.916190 | orchestrator | 2026-04-17 03:27:39.916196 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-17 03:27:39.916201 | orchestrator | Friday 17 April 2026 03:27:32 +0000 (0:00:01.013) 0:02:12.876 ********** 2026-04-17 03:27:39.916207 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:27:39.916213 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:27:39.916219 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:27:39.916224 | orchestrator | 2026-04-17 03:27:39.916230 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-17 03:27:39.916241 | orchestrator | Friday 17 April 2026 03:27:33 +0000 (0:00:01.272) 0:02:14.149 ********** 2026-04-17 03:27:39.916247 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:27:39.916252 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:27:39.916258 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:27:39.916264 | orchestrator | 2026-04-17 03:27:39.916270 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-17 03:27:39.916276 | orchestrator | Friday 17 April 2026 03:27:35 +0000 (0:00:01.965) 0:02:16.114 ********** 2026-04-17 03:27:39.916281 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:39.916287 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:39.916293 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:39.916299 | orchestrator | 2026-04-17 03:27:39.916317 | orchestrator | TASK [include_role : magnum] 
*************************************************** 2026-04-17 03:27:39.916396 | orchestrator | Friday 17 April 2026 03:27:35 +0000 (0:00:00.313) 0:02:16.428 ********** 2026-04-17 03:27:39.916407 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:27:39.916413 | orchestrator | 2026-04-17 03:27:39.916419 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-17 03:27:39.916425 | orchestrator | Friday 17 April 2026 03:27:36 +0000 (0:00:01.118) 0:02:17.546 ********** 2026-04-17 03:27:39.916432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 03:27:39.916441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 03:27:39.916449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 03:27:39.916463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 03:27:39.916476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 03:27:45.036374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 03:27:45.036467 | orchestrator | 2026-04-17 03:27:45.036479 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-17 03:27:45.036488 | 
orchestrator | Friday 17 April 2026 03:27:39 +0000 (0:00:03.137) 0:02:20.684 ********** 2026-04-17 03:27:45.036498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 03:27:45.036549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 03:27:45.036578 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:45.036585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 03:27:45.036609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 03:27:45.036618 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:45.036627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 03:27:45.036633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 03:27:45.036639 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:45.036645 | orchestrator | 2026-04-17 03:27:45.036658 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-17 03:27:45.036664 | orchestrator | Friday 17 April 2026 03:27:40 +0000 (0:00:00.656) 0:02:21.341 ********** 2026-04-17 03:27:45.036671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}})  2026-04-17 03:27:45.036678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-17 03:27:45.036685 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:27:45.036691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-17 03:27:45.036697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-17 03:27:45.036703 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:27:45.036709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-17 03:27:45.036715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-17 03:27:45.036722 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:27:45.036727 | orchestrator | 2026-04-17 03:27:45.036733 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-17 03:27:45.036740 | orchestrator | Friday 17 April 2026 03:27:41 +0000 (0:00:00.869) 0:02:22.211 ********** 2026-04-17 03:27:45.036750 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:27:45.036756 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:27:45.036762 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:27:45.036768 | orchestrator | 2026-04-17 03:27:45.036774 | orchestrator | TASK 
[proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-04-17 03:27:45.036780 | orchestrator | Friday 17 April 2026 03:27:43 +0000 (0:00:01.618) 0:02:23.829 **********
2026-04-17 03:27:45.036784 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:27:45.036788 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:27:45.036791 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:27:45.036795 | orchestrator |
2026-04-17 03:27:45.036799 | orchestrator | TASK [include_role : manila] ***************************************************
2026-04-17 03:27:45.036808 | orchestrator | Friday 17 April 2026 03:27:45 +0000 (0:00:01.981) 0:02:25.810 **********
2026-04-17 03:27:49.384738 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:27:49.384861 | orchestrator |
2026-04-17 03:27:49.384876 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-04-17 03:27:49.384886 | orchestrator | Friday 17 April 2026 03:27:46 +0000 (0:00:01.009) 0:02:26.821 **********
2026-04-17 03:27:49.384898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 03:27:49.384944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:27:49.384956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 03:27:49.384966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-17 03:27:49.384987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 03:27:49.385013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:27:49.385022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 03:27:49.385037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 03:27:49.385049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:27:49.385063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-17 03:27:49.385082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 03:27:49.385106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-17 03:27:50.322474 | orchestrator |
2026-04-17 03:27:50.322563 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-04-17 03:27:50.322575 | orchestrator | Friday 17 April 2026 03:27:49 +0000 (0:00:03.428) 0:02:30.249 **********
2026-04-17 03:27:50.322588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 03:27:50.322622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:27:50.322633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 03:27:50.322644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 03:27:50.322667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-17 03:27:50.322675 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:27:50.322701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:27:50.322715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 03:27:50.322723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-17 03:27:50.322730 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:27:50.322739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 03:27:50.322747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:27:50.322758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 03:27:50.322771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-17 03:28:01.108083 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:01.108225 | orchestrator |
2026-04-17 03:28:01.108245 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-04-17 03:28:01.108259 | orchestrator | Friday 17 April 2026 03:27:50 +0000 (0:00:00.933) 0:02:31.183 **********
2026-04-17 03:28:01.108309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-04-17 03:28:01.108324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-04-17 03:28:01.108410 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:01.108423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-04-17 03:28:01.108435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-04-17 03:28:01.108448 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:01.108459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-04-17 03:28:01.108471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-04-17 03:28:01.108482 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:01.108493 | orchestrator |
2026-04-17 03:28:01.108505 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-04-17 03:28:01.108516 | orchestrator | Friday 17 April 2026 03:27:51 +0000 (0:00:00.850) 0:02:32.034 **********
2026-04-17 03:28:01.108527 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:28:01.108538 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:28:01.108549 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:28:01.108560 | orchestrator |
2026-04-17 03:28:01.108574 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-04-17 03:28:01.108587 | orchestrator | Friday 17 April 2026 03:27:52 +0000 (0:00:01.266) 0:02:33.300 **********
2026-04-17 03:28:01.108600 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:28:01.108613 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:28:01.108625 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:28:01.108638 | orchestrator |
2026-04-17 03:28:01.108650 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-04-17 03:28:01.108663 | orchestrator | Friday 17 April 2026 03:27:54 +0000 (0:00:02.022) 0:02:35.323 **********
2026-04-17 03:28:01.108677 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:28:01.108691 | orchestrator |
2026-04-17 03:28:01.108703 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-04-17 03:28:01.108717 | orchestrator | Friday 17 April 2026 03:27:55 +0000 (0:00:01.287) 0:02:36.611 **********
2026-04-17 03:28:01.108730 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 03:28:01.108743 | orchestrator |
2026-04-17 03:28:01.108756 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-04-17 03:28:01.108769 | orchestrator | Friday 17 April 2026 03:27:58 +0000 (0:00:03.049) 0:02:39.660 **********
2026-04-17 03:28:01.108851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 03:28:01.108873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 03:28:01.108888 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:01.108904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 03:28:01.108935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 03:28:01.108958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 03:28:03.331623 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:03.331701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 03:28:03.331709 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:03.331713 | orchestrator |
2026-04-17 03:28:03.331718 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-04-17 03:28:03.331723 | orchestrator | Friday 17 April 2026 03:28:01 +0000 (0:00:02.218) 0:02:41.878 **********
2026-04-17 03:28:03.331743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 03:28:03.331762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 03:28:03.331767 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:03.331782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 03:28:03.331796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 03:28:03.331800 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:03.331804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 03:28:03.331813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 03:28:12.734730 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:12.734833 | orchestrator |
2026-04-17 03:28:12.734842 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-04-17 03:28:12.734850 | orchestrator | Friday 17 April 2026 03:28:03 +0000 (0:00:02.223) 0:02:44.102 **********
2026-04-17 03:28:12.734857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 03:28:12.734883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 03:28:12.734889 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:12.734907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 03:28:12.734913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 03:28:12.734918 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:12.734924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value':
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 03:28:12.734929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 03:28:12.734934 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:12.734940 | orchestrator |
2026-04-17 03:28:12.734945 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-04-17 03:28:12.734950 | orchestrator | Friday 17 April 2026 03:28:06 +0000 (0:00:02.703) 0:02:46.805 **********
2026-04-17 03:28:12.734955 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:28:12.734972 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:28:12.734977 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:28:12.734983 | orchestrator |
2026-04-17 03:28:12.734992 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-04-17 03:28:12.734998 | orchestrator | Friday 17 April 2026 03:28:08 +0000 (0:00:02.043) 0:02:48.849
2026-04-17 03:28:12.735003 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:12.735008 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:12.735013 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:12.735018 | orchestrator |
2026-04-17 03:28:12.735023 | orchestrator | TASK [include_role : masakari] *************************************************
2026-04-17 03:28:12.735028 | orchestrator | Friday 17 April 2026 03:28:09 +0000 (0:00:01.390) 0:02:50.240 **********
2026-04-17 03:28:12.735033 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:12.735038 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:12.735043 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:12.735048 | orchestrator |
2026-04-17 03:28:12.735053 | orchestrator | TASK [include_role : memcached] ************************************************
2026-04-17 03:28:12.735058 | orchestrator | Friday 17 April 2026 03:28:09 +0000 (0:00:00.324) 0:02:50.565 **********
2026-04-17 03:28:12.735064 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:28:12.735069 | orchestrator |
2026-04-17 03:28:12.735074 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-04-17 03:28:12.735079 | orchestrator | Friday 17 April 2026 03:28:11 +0000 (0:00:01.311) 0:02:51.876 **********
2026-04-17 03:28:12.735088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled':
False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-17 03:28:12.735097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-17 03:28:12.735103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-17 03:28:12.735108 | orchestrator | 2026-04-17 03:28:12.735114 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-17 
03:28:12.735120 | orchestrator | Friday 17 April 2026 03:28:12 +0000 (0:00:01.439) 0:02:53.315 ********** 2026-04-17 03:28:12.735133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-17 03:28:20.694779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-17 03:28:20.694877 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:28:20.694891 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:28:20.694900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-17 03:28:20.694909 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:20.694917 | orchestrator |
2026-04-17 03:28:20.694925 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-04-17 03:28:20.694935 | orchestrator | Friday 17 April 2026 03:28:12 +0000 (0:00:00.397) 0:02:53.713 **********
2026-04-17 03:28:20.694944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-17 03:28:20.694955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-17 03:28:20.694963 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:20.694970 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:20.694979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-17 03:28:20.695024 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:20.695033 | orchestrator |
2026-04-17 03:28:20.695041 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-04-17 03:28:20.695069 | orchestrator | Friday 17 April 2026 03:28:13 +0000 (0:00:00.835) 0:02:54.548 **********
2026-04-17 03:28:20.695077 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:20.695085 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:20.695093 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:20.695100 | orchestrator |
2026-04-17 03:28:20.695108 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-04-17 03:28:20.695116 | orchestrator | Friday 17 April 2026 03:28:14 +0000 (0:00:00.483) 0:02:55.031 **********
2026-04-17 03:28:20.695124 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:20.695132 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:20.695140 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:20.695147 | orchestrator |
2026-04-17 03:28:20.695155 | orchestrator | TASK [include_role : mistral] **************************************************
2026-04-17 03:28:20.695163 | orchestrator | Friday 17 April 2026 03:28:15 +0000 (0:00:01.227) 0:02:56.259 **********
2026-04-17 03:28:20.695171 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:20.695179 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:20.695186 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:20.695194 | orchestrator |
2026-04-17 03:28:20.695202 | orchestrator | TASK [include_role : neutron] **************************************************
2026-04-17 03:28:20.695210 | orchestrator | Friday 17 April 2026 03:28:15 +0000 (0:00:00.306) 0:02:56.566 **********
2026-04-17 03:28:20.695218
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:28:20.695225 | orchestrator | 2026-04-17 03:28:20.695234 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-17 03:28:20.695241 | orchestrator | Friday 17 April 2026 03:28:17 +0000 (0:00:01.370) 0:02:57.936 ********** 2026-04-17 03:28:20.695265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 03:28:20.695279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.695289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.695307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.695407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 03:28:20.695429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.804581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:20.804689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:20.804704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.804733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 03:28:20.804744 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:20.804754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.804776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.804791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 03:28:20.804810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:20.804819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.804827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.804836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.804851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 03:28:20.964549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 03:28:20.964690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:20.964717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.964733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 03:28:20.964748 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.964793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.964822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.964837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 03:28:20.964852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:20.964868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:20.964884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:20.964914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:21.171087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:21.171195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:21.171208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:21.171217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:21.171225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 03:28:21.171233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:21.171291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:21.171301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:21.171308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:21.171316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:21.171325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 03:28:21.171351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 03:28:21.171385 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:22.184812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:22.184910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.184923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 03:28:22.184933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:22.184940 | orchestrator | 2026-04-17 03:28:22.184948 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-17 03:28:22.184956 | orchestrator | Friday 17 April 2026 03:28:21 +0000 (0:00:04.011) 0:03:01.948 ********** 2026-04-17 03:28:22.184995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 03:28:22.185017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.185026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.185033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.185040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 03:28:22.185053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.185064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:22.185078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:22.268501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.268590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 03:28:22.268604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:22.268635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.268653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.268671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 
'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.268686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 03:28:22.268691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.268701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:22.268710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 03:28:22.268718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.268728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.349219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 03:28:22.349325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:22.349403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:22.349480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 
03:28:22.349501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:22.349520 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:28:22.349568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.349591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.349612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.349643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:22.349655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': 
True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.349667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.349687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 03:28:22.575942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 03:28:22.576070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.576121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:22.576135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 
'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:22.576181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.576213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:22.576254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 03:28:22.576274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.576300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:22.576323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:22.576364 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:28:22.576382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:22.576399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 
'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 03:28:22.576425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 03:28:32.837504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 03:28:32.837656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 03:28:32.837701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 03:28:32.837722 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:28:32.837742 | orchestrator | 2026-04-17 03:28:32.837761 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-17 03:28:32.837781 | orchestrator | Friday 17 April 2026 03:28:22 +0000 (0:00:01.398) 0:03:03.346 ********** 2026-04-17 03:28:32.837792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-17 03:28:32.837804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-17 03:28:32.837815 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:28:32.837825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-17 03:28:32.837835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-17 03:28:32.837844 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:28:32.837854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-17 03:28:32.837863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-17 03:28:32.837872 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:28:32.837893 | orchestrator | 2026-04-17 03:28:32.837903 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-17 03:28:32.837913 | orchestrator | Friday 17 April 2026 03:28:24 +0000 (0:00:02.017) 0:03:05.363 ********** 2026-04-17 03:28:32.837923 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:28:32.837932 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:28:32.837960 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:28:32.837972 | orchestrator | 2026-04-17 03:28:32.837982 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-17 03:28:32.837991 | orchestrator | Friday 17 April 2026 03:28:25 +0000 (0:00:01.280) 
0:03:06.644 ********** 2026-04-17 03:28:32.838001 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:28:32.838010 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:28:32.838084 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:28:32.838094 | orchestrator | 2026-04-17 03:28:32.838104 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-17 03:28:32.838113 | orchestrator | Friday 17 April 2026 03:28:27 +0000 (0:00:01.967) 0:03:08.612 ********** 2026-04-17 03:28:32.838123 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:28:32.838132 | orchestrator | 2026-04-17 03:28:32.838142 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-17 03:28:32.838151 | orchestrator | Friday 17 April 2026 03:28:28 +0000 (0:00:01.166) 0:03:09.778 ********** 2026-04-17 03:28:32.838163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 03:28:32.838181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 03:28:32.838192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 03:28:32.838213 | orchestrator | 2026-04-17 03:28:32.838223 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-17 03:28:32.838233 | 
orchestrator | Friday 17 April 2026 03:28:32 +0000 (0:00:03.343) 0:03:13.122 ********** 2026-04-17 03:28:32.838253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 03:28:42.900619 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:28:42.900737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 03:28:42.900756 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:28:42.900782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 03:28:42.900792 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:28:42.900802 | orchestrator | 2026-04-17 03:28:42.900813 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-17 03:28:42.900824 | orchestrator | Friday 17 April 2026 03:28:32 +0000 (0:00:00.492) 0:03:13.614 ********** 2026-04-17 03:28:42.900834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 03:28:42.900846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 03:28:42.900881 | orchestrator | 
skipping: [testbed-node-0] 2026-04-17 03:28:42.900892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 03:28:42.900901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 03:28:42.900910 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:28:42.900919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 03:28:42.900928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 03:28:42.900936 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:28:42.900944 | orchestrator | 2026-04-17 03:28:42.900953 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-17 03:28:42.900961 | orchestrator | Friday 17 April 2026 03:28:33 +0000 (0:00:00.724) 0:03:14.339 ********** 2026-04-17 03:28:42.900969 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:28:42.900978 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:28:42.900986 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:28:42.900995 | orchestrator | 2026-04-17 03:28:42.901004 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-17 03:28:42.901012 | orchestrator | Friday 17 April 2026 03:28:35 +0000 (0:00:01.798) 0:03:16.137 ********** 2026-04-17 03:28:42.901022 | 
orchestrator | changed: [testbed-node-0]
2026-04-17 03:28:42.901030 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:28:42.901056 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:28:42.901066 | orchestrator |
2026-04-17 03:28:42.901075 | orchestrator | TASK [include_role : nova] *****************************************************
2026-04-17 03:28:42.901084 | orchestrator | Friday 17 April 2026 03:28:37 +0000 (0:00:01.798) 0:03:17.937 **********
2026-04-17 03:28:42.901092 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:28:42.901101 | orchestrator |
2026-04-17 03:28:42.901109 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-04-17 03:28:42.901117 | orchestrator | Friday 17 April 2026 03:28:38 +0000 (0:00:01.482) 0:03:19.420 **********
2026-04-17 03:28:42.901129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-17 03:28:42.901160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:28:42.901173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-17 03:28:42.901193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 03:28:43.892663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:28:43.892741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 03:28:43.892766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-17 03:28:43.892789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:28:43.892795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 03:28:43.892801 | orchestrator |
2026-04-17 03:28:43.892808 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-04-17 03:28:43.892815 | orchestrator | Friday 17 April 2026 03:28:42 +0000 (0:00:04.255) 0:03:23.675 **********
2026-04-17 03:28:43.892834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-17 03:28:43.892841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:28:43.892856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 03:28:43.892862 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:43.892869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-17 03:28:43.892879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:28:54.920118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 03:28:54.920198 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:54.920229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-17 03:28:54.920251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 03:28:54.920256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 03:28:54.920260 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:54.920264 | orchestrator |
2026-04-17 03:28:54.920269 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-04-17 03:28:54.920274 | orchestrator | Friday 17 April 2026 03:28:43 +0000 (0:00:00.975) 0:03:24.651 **********
2026-04-17 03:28:54.920279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920313 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:28:54.920317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920336 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:28:54.920340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-17 03:28:54.920398 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:28:54.920403 | orchestrator |
2026-04-17 03:28:54.920408 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-04-17 03:28:54.920411 | orchestrator | Friday 17 April 2026 03:28:45 +0000 (0:00:01.301) 0:03:25.952 **********
2026-04-17 03:28:54.920415 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:28:54.920419 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:28:54.920423 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:28:54.920427 | orchestrator |
2026-04-17 03:28:54.920430 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-04-17 03:28:54.920434 | orchestrator | Friday 17 April 2026 03:28:46 +0000 (0:00:01.397) 0:03:27.349 **********
2026-04-17 03:28:54.920438 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:28:54.920442 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:28:54.920445 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:28:54.920449 | orchestrator |
2026-04-17 03:28:54.920453 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-04-17 03:28:54.920457 | orchestrator | Friday 17 April 2026 03:28:48 +0000 (0:00:02.006) 0:03:29.356 **********
2026-04-17 03:28:54.920460 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:28:54.920464 | orchestrator |
2026-04-17 03:28:54.920468 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-04-17 03:28:54.920472 | orchestrator | Friday 17 April 2026 03:28:50 +0000 (0:00:01.484) 0:03:30.841 **********
2026-04-17 03:28:54.920476 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-04-17 03:28:54.920481 | orchestrator |
2026-04-17 03:28:54.920485 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-04-17 03:28:54.920488 | orchestrator | Friday 17 April 2026 03:28:50 +0000 (0:00:00.807) 0:03:31.648 **********
2026-04-17 03:28:54.920494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:28:54.920507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:29:06.329665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:29:06.329767 | orchestrator |
2026-04-17 03:29:06.329779 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-04-17 03:29:06.329789 | orchestrator | Friday 17 April 2026 03:28:54 +0000 (0:00:04.045) 0:03:35.694 **********
2026-04-17 03:29:06.329797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:29:06.329805 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:29:06.329827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:29:06.329835 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:29:06.329842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:29:06.329849 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:29:06.329856 | orchestrator |
2026-04-17 03:29:06.329863 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-04-17 03:29:06.329870 | orchestrator | Friday 17 April 2026 03:28:56 +0000 (0:00:01.440) 0:03:37.135 **********
2026-04-17 03:29:06.329879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-17 03:29:06.329893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-17 03:29:06.329901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-17 03:29:06.329928 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:29:06.329935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-17 03:29:06.329942 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:29:06.329949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-17 03:29:06.329956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-17 03:29:06.329977 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:29:06.329985 | orchestrator |
2026-04-17 03:29:06.329992 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-04-17 03:29:06.329998 | orchestrator | Friday 17 April 2026 03:28:57 +0000 (0:00:01.454) 0:03:38.589 **********
2026-04-17 03:29:06.330005 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:29:06.330012 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:29:06.330062 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:29:06.330069 | orchestrator |
2026-04-17 03:29:06.330076 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-17 03:29:06.330083 | orchestrator | Friday 17 April 2026 03:29:00 +0000 (0:00:02.481) 0:03:41.071 **********
2026-04-17 03:29:06.330090 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:29:06.330096 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:29:06.330103 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:29:06.330109 | orchestrator |
2026-04-17 03:29:06.330116 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-04-17 03:29:06.330123 | orchestrator | Friday 17 April 2026 03:29:03 +0000 (0:00:02.733) 0:03:43.805 **********
2026-04-17 03:29:06.330130 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-04-17 03:29:06.330138 | orchestrator |
2026-04-17 03:29:06.330145 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-04-17 03:29:06.330152 | orchestrator | Friday 17 April 2026 03:29:04 +0000 (0:00:01.064) 0:03:44.870 **********
2026-04-17 03:29:06.330164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:29:06.330172 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:29:06.330179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:29:06.330186 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:29:06.330208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:29:06.330218 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:29:06.330229 | orchestrator |
2026-04-17 03:29:06.330239 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-04-17 03:29:06.330257 | orchestrator | Friday 17 April 2026 03:29:05 +0000 (0:00:00.979) 0:03:45.849 **********
2026-04-17 03:29:06.330270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:29:06.330282 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:29:06.330293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:29:06.330312 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:29:28.152689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-17 03:29:28.152838 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:29:28.152865 | orchestrator |
2026-04-17 03:29:28.152885 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-04-17 03:29:28.152905 | orchestrator | Friday 17 April 2026 03:29:06 +0000 (0:00:01.251) 0:03:47.101 **********
2026-04-17 03:29:28.152925 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:29:28.152944 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:29:28.152961 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:29:28.152980 | orchestrator |
2026-04-17 03:29:28.152998 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-04-17 03:29:28.153017 | orchestrator | Friday 17 April 2026 03:29:07 +0000 (0:00:01.395) 0:03:48.496 **********
2026-04-17 03:29:28.153034 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:29:28.153053 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:29:28.153070 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:29:28.153088 | orchestrator |
2026-04-17 03:29:28.153105 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-17 03:29:28.153122 | orchestrator | Friday 17 April 2026 03:29:10 +0000 (0:00:02.515) 0:03:51.011 **********
2026-04-17 03:29:28.153140 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:29:28.153157 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:29:28.153173 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:29:28.153226 | orchestrator |
2026-04-17 03:29:28.153245 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-04-17 03:29:28.153263 | orchestrator | Friday 17 April 2026 03:29:12 +0000 (0:00:02.506) 0:03:53.517 **********
2026-04-17 03:29:28.153300 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-04-17 03:29:28.153319 | orchestrator |
2026-04-17 03:29:28.153338 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-04-17 03:29:28.153356 | orchestrator | Friday 17 April 2026 03:29:13 +0000 (0:00:01.134) 0:03:54.652 **********
2026-04-17 03:29:28.153375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-17 03:29:28.153432 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:29:28.153452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-17 03:29:28.153470 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:29:28.153489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-17 03:29:28.153508 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:29:28.153527 | orchestrator |
2026-04-17 03:29:28.153544 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-04-17 03:29:28.153563 | orchestrator | Friday 17 April 2026 03:29:15 +0000 (0:00:01.291) 0:03:55.944 **********
2026-04-17 03:29:28.153604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-04-17 03:29:28.153623 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:29:28.153640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external':
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 03:29:28.153670 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:29:28.153687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 03:29:28.153704 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:29:28.153721 | orchestrator | 2026-04-17 03:29:28.153739 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-17 03:29:28.153756 | orchestrator | Friday 17 April 2026 03:29:16 +0000 (0:00:01.274) 0:03:57.218 ********** 2026-04-17 03:29:28.153781 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:29:28.153798 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:29:28.153815 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:29:28.153832 | orchestrator | 2026-04-17 03:29:28.153850 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-17 03:29:28.153867 | orchestrator | Friday 17 April 2026 03:29:18 +0000 (0:00:01.737) 0:03:58.956 ********** 2026-04-17 03:29:28.153884 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:29:28.153901 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:29:28.153919 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:29:28.153936 | orchestrator | 2026-04-17 03:29:28.153953 | orchestrator | TASK [proxysql-config : Copying over 
nova-cell ProxySQL rules config] ********** 2026-04-17 03:29:28.153971 | orchestrator | Friday 17 April 2026 03:29:20 +0000 (0:00:02.214) 0:04:01.170 ********** 2026-04-17 03:29:28.153988 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:29:28.154005 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:29:28.154099 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:29:28.154117 | orchestrator | 2026-04-17 03:29:28.154134 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-17 03:29:28.154151 | orchestrator | Friday 17 April 2026 03:29:23 +0000 (0:00:03.049) 0:04:04.219 ********** 2026-04-17 03:29:28.154167 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:29:28.154185 | orchestrator | 2026-04-17 03:29:28.154202 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-17 03:29:28.154220 | orchestrator | Friday 17 April 2026 03:29:24 +0000 (0:00:01.267) 0:04:05.487 ********** 2026-04-17 03:29:28.154238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2026-04-17 03:29:28.154257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 03:29:28.154302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 03:29:28.887525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 03:29:28.887626 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:29:28.887639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 03:29:28.887647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 03:29:28.887655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 03:29:28.887680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 03:29:28.887702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 03:29:28.887709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:29:28.887716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 03:29:28.887722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 03:29:28.887755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 03:29:28.887771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:29:28.887778 | orchestrator | 2026-04-17 03:29:28.887786 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-17 03:29:28.887793 | orchestrator | Friday 17 April 2026 03:29:28 +0000 (0:00:03.575) 0:04:09.062 ********** 2026-04-17 03:29:28.887807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 03:29:29.036430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 03:29:29.036505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 03:29:29.036512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 03:29:29.036518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:29:29.036538 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:29:29.036544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 03:29:29.036550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 03:29:29.036567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 03:29:29.036572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 03:29:29.036576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:29:29.036579 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:29:29.036584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 03:29:29.036591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 03:29:29.036595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 03:29:29.036606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 03:29:40.125766 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 03:29:40.125861 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:29:40.125870 | orchestrator | 2026-04-17 03:29:40.125874 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-17 03:29:40.125880 | orchestrator | Friday 17 April 2026 03:29:29 +0000 (0:00:00.752) 0:04:09.815 ********** 2026-04-17 03:29:40.125885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 03:29:40.125892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 03:29:40.125913 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:29:40.125917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 03:29:40.125921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}})
2026-04-17 03:29:40.125925 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:29:40.125929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-17 03:29:40.125933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-17 03:29:40.125937 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:29:40.125940 | orchestrator |
2026-04-17 03:29:40.125944 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-04-17 03:29:40.125948 | orchestrator | Friday 17 April 2026 03:29:29 +0000 (0:00:00.849) 0:04:10.664 **********
2026-04-17 03:29:40.125952 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:29:40.125955 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:29:40.125959 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:29:40.125963 | orchestrator |
2026-04-17 03:29:40.125966 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-04-17 03:29:40.125970 | orchestrator | Friday 17 April 2026 03:29:31 +0000 (0:00:01.733) 0:04:12.398 **********
2026-04-17 03:29:40.125974 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:29:40.125978 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:29:40.125981 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:29:40.125985 | orchestrator |
2026-04-17 03:29:40.125989 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-04-17 03:29:40.125993 | orchestrator | Friday 17 April 2026 03:29:33 +0000 (0:00:01.976) 0:04:14.374 **********
2026-04-17 03:29:40.125997 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:29:40.126001 | orchestrator |
2026-04-17 03:29:40.126007 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-04-17 03:29:40.126052 | orchestrator | Friday 17 April 2026 03:29:34 +0000 (0:00:01.312) 0:04:15.686 **********
2026-04-17 03:29:40.126076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-17 03:29:40.126103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-17 03:29:40.126117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-17 03:29:40.126124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-17 03:29:40.126132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-17 03:29:40.126149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-17 03:29:42.050571 | orchestrator |
2026-04-17 03:29:42.050696 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-04-17 03:29:42.050715 | orchestrator | Friday 17 April 2026 03:29:40 +0000 (0:00:05.209) 0:04:20.896 **********
2026-04-17 03:29:42.050732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-17 03:29:42.050747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-17 03:29:42.050760 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:29:42.050772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-17 03:29:42.050805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-17 03:29:42.050865 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:29:42.050877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-17 03:29:42.050891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-17 03:29:42.050898 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:29:42.050905 | orchestrator |
2026-04-17 03:29:42.050911 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-04-17 03:29:42.050918 | orchestrator | Friday 17 April 2026 03:29:41 +0000 (0:00:01.023) 0:04:21.919 **********
2026-04-17 03:29:42.050925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-17 03:29:42.050934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-17 03:29:42.050943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-17 03:29:42.050951 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:29:42.050958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-17 03:29:42.050976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-17 03:29:42.050986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-17 03:29:42.050997 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:29:42.051011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-17 03:29:42.051026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-17 03:29:42.051045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-17 03:29:48.055621 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:29:48.055698 | orchestrator |
2026-04-17 03:29:48.055706 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-04-17 03:29:48.055713 | orchestrator | Friday 17 April 2026 03:29:42 +0000 (0:00:00.898) 0:04:22.818 **********
2026-04-17 03:29:48.055717 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:29:48.055721 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:29:48.055725 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:29:48.055729 | orchestrator |
2026-04-17 03:29:48.055733 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-17 03:29:48.055737 | orchestrator | Friday 17 April 2026 03:29:42 +0000 (0:00:00.436) 0:04:23.254 **********
2026-04-17 03:29:48.055741 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:29:48.055745 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:29:48.055749 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:29:48.055753 | orchestrator |
2026-04-17 03:29:48.055757 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-17 03:29:48.055760 | orchestrator | Friday 17 April 2026 03:29:44 +0000 (0:00:01.662) 0:04:24.916 **********
2026-04-17 03:29:48.055765 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:29:48.055769 | orchestrator |
2026-04-17 03:29:48.055773 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-17 03:29:48.055777 | orchestrator | Friday 17 April 2026 03:29:45 +0000 (0:00:01.634) 0:04:26.551 **********
2026-04-17 03:29:48.055782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-17 03:29:48.055790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 03:29:48.055812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:48.055826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:48.055831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 03:29:48.055846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-17 03:29:48.055851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 03:29:48.055855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:48.055859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:48.055868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 03:29:48.055875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-17 03:29:48.055879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 03:29:48.055887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:49.565720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:49.565816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 03:29:49.565850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-17 03:29:49.565877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-17 03:29:49.565886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:49.565895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:49.565920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 03:29:49.565929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-17 03:29:49.565945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-17 03:29:49.565959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-17 03:29:49.565974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-17 03:29:50.243644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:50.243797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:50.243841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:50.243852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:50.243882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 03:29:50.243894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 03:29:50.243904 | orchestrator |
2026-04-17 03:29:50.243916 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-04-17 03:29:50.243927 | orchestrator | Friday 17 April 2026 03:29:49 +0000 (0:00:03.920) 0:04:30.471 **********
2026-04-17 03:29:50.243935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-17 03:29:50.243958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 03:29:50.243981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:29:50.243993 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:29:50.244023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 03:29:50.244061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-17 03:29:50.244073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-17 03:29:50.244099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-17 03:29:50.440749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:29:50.440849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 03:29:50.440858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:29:50.440881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:29:50.440888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 03:29:50.440893 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:29:50.440901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:29:50.440907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 03:29:50.440947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-17 03:29:50.440955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-17 03:29:50.440965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:29:50.440970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:29:50.440975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 03:29:50.440980 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:29:50.440986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-17 03:29:50.441001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 03:29:52.188981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:29:52.189087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:29:52.189117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 03:29:52.189130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-17 03:29:52.189143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-17 03:29:52.189174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:29:52.189202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 03:29:52.189212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 03:29:52.189222 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:29:52.189233 | orchestrator | 2026-04-17 03:29:52.189243 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-17 03:29:52.189253 | orchestrator | Friday 17 April 2026 03:29:50 +0000 (0:00:00.893) 0:04:31.365 ********** 2026-04-17 03:29:52.189263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-17 03:29:52.189279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-17 03:29:52.189291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-17 03:29:52.189303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-17 03:29:52.189313 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:29:52.189323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-17 03:29:52.189338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-17 03:29:52.189347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-17 03:29:52.189357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-17 03:29:52.189365 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:29:52.189374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-17 03:29:52.189383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-17 03:29:52.189392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-17 03:29:52.189444 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-17 03:29:59.287131 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:29:59.287231 | orchestrator | 2026-04-17 03:29:59.287244 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-17 03:29:59.287254 | orchestrator | Friday 17 April 2026 03:29:52 +0000 (0:00:01.590) 0:04:32.955 ********** 2026-04-17 03:29:59.287262 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:29:59.287271 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:29:59.287279 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:29:59.287286 | orchestrator | 2026-04-17 03:29:59.287295 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-17 03:29:59.287303 | orchestrator | Friday 17 April 2026 03:29:52 +0000 (0:00:00.439) 0:04:33.395 ********** 2026-04-17 03:29:59.287311 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:29:59.287318 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:29:59.287327 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:29:59.287341 | orchestrator | 2026-04-17 03:29:59.287354 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-17 03:29:59.287366 | orchestrator | Friday 17 April 2026 03:29:53 +0000 (0:00:01.263) 0:04:34.658 ********** 2026-04-17 03:29:59.287378 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:29:59.287392 | orchestrator | 2026-04-17 03:29:59.287458 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-17 03:29:59.287475 | 
orchestrator | Friday 17 April 2026 03:29:55 +0000 (0:00:01.724) 0:04:36.382 ********** 2026-04-17 03:29:59.287489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:29:59.287534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:29:59.287580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:29:59.287590 | orchestrator | 2026-04-17 03:29:59.287598 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-17 03:29:59.287623 | orchestrator | Friday 17 April 2026 03:29:57 +0000 (0:00:02.067) 0:04:38.450 ********** 2026-04-17 03:29:59.287632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 03:29:59.287641 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:29:59.287662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 03:29:59.287671 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:29:59.287681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 03:29:59.287691 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:29:59.287700 | orchestrator | 2026-04-17 03:29:59.287709 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-17 03:29:59.287718 | orchestrator | Friday 17 April 2026 03:29:58 +0000 (0:00:00.413) 0:04:38.863 ********** 2026-04-17 03:29:59.287728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-17 03:29:59.287738 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:29:59.287748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-17 03:29:59.287757 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:29:59.287766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-17 03:29:59.287777 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:29:59.287790 | orchestrator | 2026-04-17 03:29:59.287804 | orchestrator | TASK [proxysql-config : Copying over rabbitmq 
ProxySQL users config] *********** 2026-04-17 03:29:59.287818 | orchestrator | Friday 17 April 2026 03:29:58 +0000 (0:00:00.630) 0:04:39.493 ********** 2026-04-17 03:29:59.287838 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:30:09.174648 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:30:09.174804 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:30:09.174825 | orchestrator | 2026-04-17 03:30:09.174841 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-17 03:30:09.174856 | orchestrator | Friday 17 April 2026 03:29:59 +0000 (0:00:00.818) 0:04:40.311 ********** 2026-04-17 03:30:09.174869 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:30:09.174881 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:30:09.174895 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:30:09.174944 | orchestrator | 2026-04-17 03:30:09.174960 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-17 03:30:09.174975 | orchestrator | Friday 17 April 2026 03:30:00 +0000 (0:00:01.323) 0:04:41.635 ********** 2026-04-17 03:30:09.174989 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:30:09.175003 | orchestrator | 2026-04-17 03:30:09.175016 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-17 03:30:09.175027 | orchestrator | Friday 17 April 2026 03:30:02 +0000 (0:00:01.441) 0:04:43.076 ********** 2026-04-17 03:30:09.175058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 03:30:09.175074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 03:30:09.175085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 03:30:09.175119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 03:30:09.175144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 03:30:09.175155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 03:30:09.175164 | orchestrator | 2026-04-17 03:30:09.175175 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-17 03:30:09.175185 | orchestrator | Friday 17 April 2026 03:30:08 +0000 (0:00:05.845) 0:04:48.922 ********** 2026-04-17 03:30:09.175193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 
'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 03:30:09.175210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 03:30:14.851865 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:30:14.852020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 03:30:14.852047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 03:30:14.852063 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:30:14.852079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 03:30:14.852094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 03:30:14.852135 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:30:14.852153 | orchestrator | 2026-04-17 03:30:14.852168 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 
2026-04-17 03:30:14.852184 | orchestrator | Friday 17 April 2026 03:30:09 +0000 (0:00:01.030) 0:04:49.953 **********
2026-04-17 03:30:14.852216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852282 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:14.852297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852353 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:14.852366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-17 03:30:14.852444 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:14.852459 | orchestrator |
2026-04-17 03:30:14.852474 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-17 03:30:14.852487 | orchestrator | Friday 17 April 2026 03:30:10 +0000 (0:00:00.946) 0:04:50.899 **********
2026-04-17 03:30:14.852512 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:30:14.852526 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:30:14.852540 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:30:14.852554 | orchestrator |
2026-04-17 03:30:14.852568 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-17 03:30:14.852581 | orchestrator | Friday 17 April 2026 03:30:11 +0000 (0:00:01.288) 0:04:52.188 **********
2026-04-17 03:30:14.852596 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:30:14.852610 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:30:14.852624 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:30:14.852638 | orchestrator |
2026-04-17 03:30:14.852652 | orchestrator | TASK [include_role : swift] ****************************************************
2026-04-17 03:30:14.852665 | orchestrator | Friday 17 April 2026 03:30:13 +0000 (0:00:02.128) 0:04:54.317 **********
2026-04-17 03:30:14.852679 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:14.852693 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:14.852706 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:14.852720 | orchestrator |
2026-04-17 03:30:14.852734 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-17 03:30:14.852747 | orchestrator | Friday 17 April 2026 03:30:14 +0000 (0:00:00.666) 0:04:54.983 **********
2026-04-17 03:30:14.852761 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:14.852774 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:14.852788 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:14.852801 | orchestrator |
2026-04-17 03:30:14.852815 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-17 03:30:14.852828 | orchestrator | Friday 17 April 2026 03:30:14 +0000 (0:00:00.322) 0:04:55.306 **********
2026-04-17 03:30:14.852841 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:14.852863 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:57.909250 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:57.909345 | orchestrator |
2026-04-17 03:30:57.909357 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-17 03:30:57.909367 | orchestrator | Friday 17 April 2026 03:30:14 +0000 (0:00:00.325) 0:04:55.632 **********
2026-04-17 03:30:57.909373 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:57.909380 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:57.909386 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:57.909391 | orchestrator |
2026-04-17 03:30:57.909395 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-17 03:30:57.909399 | orchestrator | Friday 17 April 2026 03:30:15 +0000 (0:00:00.321) 0:04:55.953 **********
2026-04-17 03:30:57.909403 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:57.909408 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:57.909412 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:57.909416 | orchestrator |
2026-04-17 03:30:57.909420 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-17 03:30:57.909424 | orchestrator | Friday 17 April 2026 03:30:15 +0000 (0:00:00.580) 0:04:56.534 **********
2026-04-17 03:30:57.909428 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:57.909433 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:57.909486 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:57.909492 | orchestrator |
2026-04-17 03:30:57.909496 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-04-17 03:30:57.909500 | orchestrator | Friday 17 April 2026 03:30:16 +0000 (0:00:00.546) 0:04:57.080 **********
2026-04-17 03:30:57.909504 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:30:57.909509 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:30:57.909513 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:30:57.909517 | orchestrator |
2026-04-17 03:30:57.909521 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-04-17 03:30:57.909525 | orchestrator | Friday 17 April 2026 03:30:16 +0000 (0:00:00.651) 0:04:57.732 **********
2026-04-17 03:30:57.909546 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:30:57.909550 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:30:57.909554 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:30:57.909558 | orchestrator |
2026-04-17 03:30:57.909562 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-04-17 03:30:57.909566 | orchestrator | Friday 17 April 2026 03:30:17 +0000 (0:00:00.333) 0:04:58.065 **********
2026-04-17 03:30:57.909570 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:30:57.909574 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:30:57.909578 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:30:57.909582 | orchestrator |
2026-04-17 03:30:57.909586 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-04-17 03:30:57.909590 | orchestrator | Friday 17 April 2026 03:30:18 +0000 (0:00:01.202) 0:04:59.268 **********
2026-04-17 03:30:57.909593 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:30:57.909597 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:30:57.909601 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:30:57.909605 | orchestrator |
2026-04-17 03:30:57.909609 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-04-17 03:30:57.909613 | orchestrator | Friday 17 April 2026 03:30:19 +0000 (0:00:00.855) 0:05:00.124 **********
2026-04-17 03:30:57.909617 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:30:57.909621 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:30:57.909625 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:30:57.909629 | orchestrator |
2026-04-17 03:30:57.909632 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-04-17 03:30:57.909636 | orchestrator | Friday 17 April 2026 03:30:20 +0000 (0:00:00.883) 0:05:01.007 **********
2026-04-17 03:30:57.909640 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:30:57.909645 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:30:57.909648 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:30:57.909652 | orchestrator |
2026-04-17 03:30:57.909656 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-04-17 03:30:57.909662 | orchestrator | Friday 17 April 2026 03:30:24 +0000 (0:00:04.458) 0:05:05.465 **********
2026-04-17 03:30:57.909667 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:30:57.909673 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:30:57.909678 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:30:57.909684 | orchestrator |
2026-04-17 03:30:57.909690 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-04-17 03:30:57.909696 | orchestrator | Friday 17 April 2026 03:30:27 +0000 (0:00:03.200) 0:05:08.666 **********
2026-04-17 03:30:57.909702 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:30:57.909708 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:30:57.909714 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:30:57.909720 | orchestrator |
2026-04-17 03:30:57.909727 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-04-17 03:30:57.909733 | orchestrator | Friday 17 April 2026 03:30:43 +0000 (0:00:15.521) 0:05:24.187 **********
2026-04-17 03:30:57.909740 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:30:57.909746 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:30:57.909751 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:30:57.909757 | orchestrator |
2026-04-17 03:30:57.909763 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-04-17 03:30:57.909770 | orchestrator | Friday 17 April 2026 03:30:44 +0000 (0:00:00.738) 0:05:24.926 **********
2026-04-17 03:30:57.909776 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:30:57.909782 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:30:57.909789 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:30:57.909794 | orchestrator |
2026-04-17 03:30:57.909798 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-04-17 03:30:57.909804 | orchestrator | Friday 17 April 2026 03:30:48 +0000 (0:00:04.234) 0:05:29.160 **********
2026-04-17 03:30:57.909810 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:57.909817 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:57.909835 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:57.909841 | orchestrator |
2026-04-17 03:30:57.909849 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-04-17 03:30:57.909853 | orchestrator | Friday 17 April 2026 03:30:49 +0000 (0:00:00.678) 0:05:29.839 **********
2026-04-17 03:30:57.909858 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:57.909862 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:57.909867 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:57.909872 | orchestrator |
2026-04-17 03:30:57.909889 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-04-17 03:30:57.909894 | orchestrator | Friday 17 April 2026 03:30:49 +0000 (0:00:00.362) 0:05:30.201 **********
2026-04-17 03:30:57.909899 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:57.909903 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:57.909908 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:57.909912 | orchestrator |
2026-04-17 03:30:57.909917 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-04-17 03:30:57.909922 | orchestrator | Friday 17 April 2026 03:30:49 +0000 (0:00:00.357) 0:05:30.559 **********
2026-04-17 03:30:57.909927 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:57.909934 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:57.909940 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:57.909947 | orchestrator |
2026-04-17 03:30:57.909954 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-04-17 03:30:57.909961 | orchestrator | Friday 17 April 2026 03:30:50 +0000 (0:00:00.368) 0:05:30.927 **********
2026-04-17 03:30:57.909968 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:57.909974 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:57.909981 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:57.909987 | orchestrator |
2026-04-17 03:30:57.909997 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-04-17 03:30:57.910005 | orchestrator | Friday 17 April 2026 03:30:50 +0000 (0:00:00.714) 0:05:31.642 **********
2026-04-17 03:30:57.910012 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:30:57.910065 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:30:57.910073 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:30:57.910080 | orchestrator |
2026-04-17 03:30:57.910087 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-04-17 03:30:57.910094 | orchestrator | Friday 17 April 2026 03:30:51 +0000 (0:00:00.351) 0:05:31.993 **********
2026-04-17 03:30:57.910102 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:30:57.910106 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:30:57.910110 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:30:57.910115 | orchestrator |
2026-04-17 03:30:57.910119 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-04-17 03:30:57.910124 | orchestrator | Friday 17 April 2026 03:30:55 +0000 (0:00:04.771) 0:05:36.764 **********
2026-04-17 03:30:57.910128 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:30:57.910133 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:30:57.910137 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:30:57.910142 | orchestrator |
2026-04-17 03:30:57.910146 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:30:57.910152 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-17 03:30:57.910158 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-17 03:30:57.910163 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-17 03:30:57.910167 | orchestrator |
2026-04-17 03:30:57.910171 | orchestrator |
2026-04-17 03:30:57.910175 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:30:57.910179 | orchestrator | Friday 17 April 2026 03:30:56 +0000 (0:00:00.859) 0:05:37.624 **********
2026-04-17 03:30:57.910198 | orchestrator | ===============================================================================
2026-04-17 03:30:57.910202 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.52s
2026-04-17 03:30:57.910206 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.85s
2026-04-17 03:30:57.910210 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.21s
2026-04-17 03:30:57.910214 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.77s
2026-04-17 03:30:57.910217 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.46s
2026-04-17 03:30:57.910221 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.26s
2026-04-17 03:30:57.910225 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.23s
2026-04-17 03:30:57.910229 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.05s
2026-04-17 03:30:57.910233 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.01s
2026-04-17 03:30:57.910237 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.92s
2026-04-17 03:30:57.910241 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.82s
2026-04-17 03:30:57.910245 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.58s
2026-04-17 03:30:57.910249 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.57s
2026-04-17 03:30:57.910253 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.43s
2026-04-17 03:30:57.910256 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.34s
2026-04-17 03:30:57.910260 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.33s
2026-04-17 03:30:57.910264 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.23s
2026-04-17 03:30:57.910268 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.22s
2026-04-17 03:30:57.910272 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.20s
2026-04-17 03:30:57.910276 | orchestrator | loadbalancer : Wait for backup haproxy to start ------------------------- 3.20s
2026-04-17 03:31:00.662880 | orchestrator | 2026-04-17 03:31:00 | INFO  | Task 55db4acd-112d-41fa-9141-b4761654c6ea (opensearch) was prepared for execution.
2026-04-17 03:31:00.662952 | orchestrator | 2026-04-17 03:31:00 | INFO  | It takes a moment until task 55db4acd-112d-41fa-9141-b4761654c6ea (opensearch) has been started and output is visible here.
2026-04-17 03:31:10.996203 | orchestrator |
2026-04-17 03:31:10.996302 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 03:31:10.996315 | orchestrator |
2026-04-17 03:31:10.996324 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 03:31:10.996332 | orchestrator | Friday 17 April 2026 03:31:04 +0000 (0:00:00.255) 0:00:00.255 **********
2026-04-17 03:31:10.996340 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:31:10.996348 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:31:10.996356 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:31:10.996363 | orchestrator |
2026-04-17 03:31:10.996371 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 03:31:10.996378 | orchestrator | Friday 17 April 2026 03:31:05 +0000 (0:00:00.285) 0:00:00.541 **********
2026-04-17 03:31:10.996386 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-17 03:31:10.996408 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-17 03:31:10.996416 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-17 03:31:10.996423 | orchestrator |
2026-04-17 03:31:10.996430 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-17 03:31:10.996438 | orchestrator |
2026-04-17 03:31:10.996445 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-17 03:31:10.996504 | orchestrator | Friday 17 April 2026 03:31:05 +0000 (0:00:00.417) 0:00:00.959 **********
2026-04-17 03:31:10.996513 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0,
testbed-node-1, testbed-node-2
2026-04-17 03:31:10.996521 | orchestrator |
2026-04-17 03:31:10.996528 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-04-17 03:31:10.996536 | orchestrator | Friday 17 April 2026 03:31:05 +0000 (0:00:00.470) 0:00:01.429 **********
2026-04-17 03:31:10.996543 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-17 03:31:10.996550 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-17 03:31:10.996557 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-17 03:31:10.996565 | orchestrator |
2026-04-17 03:31:10.996572 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-04-17 03:31:10.996580 | orchestrator | Friday 17 April 2026 03:31:06 +0000 (0:00:00.640) 0:00:02.069 **********
2026-04-17 03:31:10.996590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-17 03:31:10.996601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name':
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 03:31:10.996623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 03:31:10.996638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 03:31:10.996653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 03:31:10.996661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 03:31:10.996669 | orchestrator | 2026-04-17 03:31:10.996676 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-17 03:31:10.996684 | orchestrator | Friday 17 April 2026 03:31:08 +0000 (0:00:01.588) 0:00:03.658 ********** 2026-04-17 03:31:10.996691 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:31:10.996699 | orchestrator | 2026-04-17 03:31:10.996706 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-17 03:31:10.996713 | orchestrator | Friday 17 April 2026 03:31:08 +0000 (0:00:00.497) 0:00:04.156 ********** 2026-04-17 03:31:10.996727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 03:31:11.775202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 03:31:11.775314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 03:31:11.775331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 03:31:11.775364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 03:31:11.775434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 03:31:11.775448 | orchestrator | 2026-04-17 03:31:11.775514 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-17 03:31:11.775526 | orchestrator | Friday 17 April 2026 03:31:10 +0000 (0:00:02.312) 0:00:06.468 ********** 
2026-04-17 03:31:11.775537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 03:31:11.775548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-04-17 03:31:11.775559 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:31:11.775571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 03:31:11.775603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 03:31:12.771601 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:31:12.771702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 03:31:12.771723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 03:31:12.771737 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:31:12.771749 | orchestrator | 2026-04-17 03:31:12.771775 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-17 03:31:12.771798 | orchestrator | Friday 17 April 2026 03:31:11 +0000 (0:00:00.782) 0:00:07.251 ********** 2026-04-17 03:31:12.771810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 03:31:12.771862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 03:31:12.771894 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:31:12.771907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 03:31:12.771919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 03:31:12.771931 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:31:12.771942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 03:31:12.771967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 03:31:12.771979 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:31:12.771990 | orchestrator | 2026-04-17 03:31:12.772004 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-17 03:31:12.772039 | orchestrator | Friday 17 April 2026 03:31:12 +0000 (0:00:00.987) 0:00:08.239 ********** 2026-04-17 03:31:20.640899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 03:31:20.641005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 03:31:20.641015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 03:31:20.641056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 03:31:20.641081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 03:31:20.641089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 03:31:20.641096 | orchestrator | 2026-04-17 03:31:20.641104 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-17 03:31:20.641119 | orchestrator | Friday 17 April 2026 03:31:14 +0000 (0:00:02.227) 0:00:10.467 ********** 2026-04-17 03:31:20.641126 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:31:20.641133 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:31:20.641138 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:31:20.641144 | orchestrator | 2026-04-17 03:31:20.641150 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-17 03:31:20.641155 | orchestrator | Friday 17 April 2026 03:31:17 +0000 (0:00:02.356) 0:00:12.823 ********** 2026-04-17 03:31:20.641161 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:31:20.641167 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:31:20.641173 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:31:20.641178 | 
orchestrator | 2026-04-17 03:31:20.641184 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-17 03:31:20.641190 | orchestrator | Friday 17 April 2026 03:31:19 +0000 (0:00:01.684) 0:00:14.508 ********** 2026-04-17 03:31:20.641198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 03:31:20.641209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-04-17 03:31:20.641220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 03:34:01.546397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-04-17 03:34:01.546557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 03:34:01.546653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-17 03:34:01.546666 | orchestrator |
2026-04-17 03:34:01.546678 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-17 03:34:01.546689 | orchestrator | Friday 17 April 2026 03:31:20 +0000 (0:00:00.319) 0:00:16.116 **********
2026-04-17 03:34:01.546699 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:34:01.546710 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:34:01.546720 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:34:01.546730 | orchestrator |
2026-04-17 03:34:01.546739 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-17 03:34:01.546750 | orchestrator | Friday 17 April 2026 03:31:20 +0000 (0:00:00.082) 0:00:16.435 **********
2026-04-17 03:34:01.546760 | orchestrator |
2026-04-17 03:34:01.546770 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-17 03:34:01.546791 | orchestrator | Friday 17 April 2026 03:31:21 +0000 (0:00:00.072) 0:00:16.518 **********
2026-04-17 03:34:01.546810 | orchestrator |
2026-04-17 03:34:01.546820 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-17 03:34:01.546830 | orchestrator | Friday 17 April 2026 03:31:21 +0000 (0:00:00.064) 0:00:16.591 **********
2026-04-17 03:34:01.546849 | orchestrator |
2026-04-17 03:34:01.546859 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-04-17 03:34:01.546885 | orchestrator | Friday 17 April 2026 03:31:21 +0000 (0:00:00.064) 0:00:16.655 **********
2026-04-17 03:34:01.546896 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:34:01.546905 | orchestrator |
2026-04-17 03:34:01.546915 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-04-17 03:34:01.546926 | orchestrator | Friday 17 April 2026 03:31:21 +0000 (0:00:00.212) 0:00:16.867 **********
2026-04-17 03:34:01.546937 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:34:01.546949 | orchestrator |
2026-04-17 03:34:01.546959 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-04-17 03:34:01.546970 | orchestrator | Friday 17 April 2026 03:31:22 +0000 (0:00:00.646) 0:00:17.514 **********
2026-04-17 03:34:01.546981 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:34:01.546993 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:34:01.547005 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:34:01.547016 | orchestrator |
2026-04-17 03:34:01.547026 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-04-17 03:34:01.547037 | orchestrator | Friday 17 April 2026 03:32:29 +0000 (0:01:06.993) 0:01:24.507 **********
2026-04-17 03:34:01.547048 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:34:01.547059 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:34:01.547070 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:34:01.547080 | orchestrator |
2026-04-17 03:34:01.547091 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-17 03:34:01.547102 | orchestrator | Friday 17 April 2026 03:33:51 +0000 (0:01:22.090) 0:02:46.598 **********
2026-04-17 03:34:01.547114 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:34:01.547125 | orchestrator |
2026-04-17 03:34:01.547136 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-04-17 03:34:01.547147 | orchestrator | Friday 17 April 2026 03:33:51 +0000 (0:00:00.497) 0:02:47.096 **********
2026-04-17 03:34:01.547158 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:34:01.547169 | orchestrator |
2026-04-17 03:34:01.547180 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-04-17 03:34:01.547191 | orchestrator | Friday 17 April 2026 03:33:54 +0000 (0:00:02.756) 0:02:49.852 **********
2026-04-17 03:34:01.547202 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:34:01.547213 | orchestrator |
2026-04-17 03:34:01.547224 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-04-17 03:34:01.547235 | orchestrator | Friday 17 April 2026 03:33:56 +0000 (0:00:02.164) 0:02:52.017 **********
2026-04-17 03:34:01.547247 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:34:01.547258 | orchestrator |
2026-04-17 03:34:01.547269 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-04-17 03:34:01.547280 | orchestrator | Friday 17 April 2026 03:33:59 +0000 (0:00:02.591) 0:02:54.608 **********
2026-04-17 03:34:01.547291 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:34:01.547302 | orchestrator |
2026-04-17 03:34:01.547312 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:34:01.547323 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 03:34:01.547334 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-17 03:34:01.547344 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-17 03:34:01.547354 | orchestrator |
2026-04-17 03:34:01.547363 | orchestrator |
2026-04-17 03:34:01.547373 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:34:01.547388 | orchestrator | Friday 17 April 2026 03:34:01 +0000 (0:00:02.395) 0:02:57.004 **********
2026-04-17 03:34:01.547403 | orchestrator | ===============================================================================
2026-04-17 03:34:01.547413 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 82.09s
2026-04-17 03:34:01.547423 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.99s
2026-04-17 03:34:01.547432 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.76s
2026-04-17 03:34:01.547442 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.59s
2026-04-17 03:34:01.547451 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.40s
2026-04-17 03:34:01.547461 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.36s
2026-04-17 03:34:01.547471 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.31s
2026-04-17 03:34:01.547480 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.23s
2026-04-17 03:34:01.547490 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.16s
2026-04-17 03:34:01.547499 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.68s
2026-04-17 03:34:01.547509 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.61s
2026-04-17 03:34:01.547518 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.59s
2026-04-17 03:34:01.547528 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.99s
2026-04-17 03:34:01.547537 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.78s
2026-04-17 03:34:01.547547 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.65s
2026-04-17 03:34:01.547556 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s
2026-04-17 03:34:01.547593 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s
2026-04-17 03:34:01.889948 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s
2026-04-17 03:34:01.890066 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s
2026-04-17 03:34:01.890075 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s
2026-04-17 03:34:04.309258 | orchestrator | 2026-04-17 03:34:04 | INFO  | Task 9b08c831-3dfb-423d-a650-2ef34d78a617 (memcached) was prepared for execution.
2026-04-17 03:34:04.309356 | orchestrator | 2026-04-17 03:34:04 | INFO  | It takes a moment until task 9b08c831-3dfb-423d-a650-2ef34d78a617 (memcached) has been started and output is visible here.
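The TASKS RECAP block above lists per-task durations in the form `task name ---- 82.09s` (the Ansible `profile_tasks` output style). A small, hypothetical helper for pulling these timings out of a captured console log could look like the sketch below; the function name `parse_recap` and the sample lines (copied from the recap above) are illustrative, not part of the job:

```python
import re

# Matches profile_tasks recap lines such as:
#   "opensearch : Restart opensearch container ------------------------------ 66.99s"
RECAP_RE = re.compile(r"^(?P<task>.+?) -{2,} (?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task, seconds) pairs from TASKS RECAP lines, slowest first."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task").strip(), float(m.group("secs"))))
    return sorted(out, key=lambda pair: -pair[1])

sample = [
    "opensearch : Restart opensearch-dashboards container ------------------- 82.09s",
    "opensearch : Restart opensearch container ------------------------------ 66.99s",
    "opensearch : Wait for OpenSearch to become ready ------------------------ 2.76s",
]
print(parse_recap(sample)[0])
# ('opensearch : Restart opensearch-dashboards container', 82.09)
```

Sorting slowest-first makes it easy to spot that the two container restarts dominate this run (82.09s and 66.99s versus a few seconds for everything else).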
2026-04-17 03:34:16.120923 | orchestrator |
2026-04-17 03:34:16.121029 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 03:34:16.121048 | orchestrator |
2026-04-17 03:34:16.121061 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 03:34:16.121074 | orchestrator | Friday 17 April 2026 03:34:08 +0000 (0:00:00.248) 0:00:00.248 **********
2026-04-17 03:34:16.121087 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:34:16.121101 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:34:16.121114 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:34:16.121125 | orchestrator |
2026-04-17 03:34:16.121137 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 03:34:16.121150 | orchestrator | Friday 17 April 2026 03:34:08 +0000 (0:00:00.306) 0:00:00.555 **********
2026-04-17 03:34:16.121164 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-04-17 03:34:16.121178 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-04-17 03:34:16.121191 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-04-17 03:34:16.121204 | orchestrator |
2026-04-17 03:34:16.121212 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-04-17 03:34:16.121220 | orchestrator |
2026-04-17 03:34:16.121227 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-04-17 03:34:16.121267 | orchestrator | Friday 17 April 2026 03:34:09 +0000 (0:00:00.400) 0:00:00.955 **********
2026-04-17 03:34:16.121276 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:34:16.121286 | orchestrator |
2026-04-17 03:34:16.121299 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-04-17 03:34:16.121315 | orchestrator | Friday 17 April 2026 03:34:09 +0000 (0:00:00.463) 0:00:01.419 **********
2026-04-17 03:34:16.121332 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-17 03:34:16.121344 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-17 03:34:16.121356 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-17 03:34:16.121367 | orchestrator |
2026-04-17 03:34:16.121379 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-04-17 03:34:16.121391 | orchestrator | Friday 17 April 2026 03:34:10 +0000 (0:00:00.636) 0:00:02.056 **********
2026-04-17 03:34:16.121403 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-17 03:34:16.121415 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-17 03:34:16.121427 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-17 03:34:16.121439 | orchestrator |
2026-04-17 03:34:16.121451 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-04-17 03:34:16.121463 | orchestrator | Friday 17 April 2026 03:34:12 +0000 (0:00:01.827) 0:00:03.884 **********
2026-04-17 03:34:16.121475 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:34:16.121489 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:34:16.121502 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:34:16.121515 | orchestrator |
2026-04-17 03:34:16.121546 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-04-17 03:34:16.121559 | orchestrator | Friday 17 April 2026 03:34:13 +0000 (0:00:01.471) 0:00:05.355 **********
2026-04-17 03:34:16.121596 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:34:16.121608 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:34:16.121620 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:34:16.121631 | orchestrator |
2026-04-17 03:34:16.121643 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:34:16.121656 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:34:16.121669 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:34:16.121680 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:34:16.121692 | orchestrator |
2026-04-17 03:34:16.121703 | orchestrator |
2026-04-17 03:34:16.121714 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:34:16.121726 | orchestrator | Friday 17 April 2026 03:34:15 +0000 (0:00:02.151) 0:00:07.506 **********
2026-04-17 03:34:16.121737 | orchestrator | ===============================================================================
2026-04-17 03:34:16.121748 | orchestrator | memcached : Restart memcached container --------------------------------- 2.15s
2026-04-17 03:34:16.121760 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.83s
2026-04-17 03:34:16.121772 | orchestrator | memcached : Check memcached container ----------------------------------- 1.47s
2026-04-17 03:34:16.121784 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.64s
2026-04-17 03:34:16.121795 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.46s
2026-04-17 03:34:16.121807 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2026-04-17 03:34:16.121818 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-04-17 03:34:18.524443 | orchestrator | 2026-04-17 03:34:18 | INFO  | Task 1d35777d-10ea-4b34-aae4-3e8be7bfa46e (redis) was prepared for execution.
2026-04-17 03:34:18.524551 | orchestrator | 2026-04-17 03:34:18 | INFO  | It takes a moment until task 1d35777d-10ea-4b34-aae4-3e8be7bfa46e (redis) has been started and output is visible here.
2026-04-17 03:34:27.347311 | orchestrator |
2026-04-17 03:34:27.347431 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 03:34:27.347444 | orchestrator |
2026-04-17 03:34:27.347453 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 03:34:27.347462 | orchestrator | Friday 17 April 2026 03:34:22 +0000 (0:00:00.247) 0:00:00.247 **********
2026-04-17 03:34:27.347470 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:34:27.347479 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:34:27.347488 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:34:27.347496 | orchestrator |
2026-04-17 03:34:27.347504 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 03:34:27.347512 | orchestrator | Friday 17 April 2026 03:34:22 +0000 (0:00:00.311) 0:00:00.558 **********
2026-04-17 03:34:27.347520 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-17 03:34:27.347529 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-17 03:34:27.347537 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-17 03:34:27.347545 | orchestrator |
2026-04-17 03:34:27.347553 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-17 03:34:27.347561 | orchestrator |
2026-04-17 03:34:27.347568 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-17 03:34:27.347614 | orchestrator | Friday 17 April 2026 03:34:23 +0000 (0:00:00.444) 0:00:01.002 **********
2026-04-17 03:34:27.347634 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1,
testbed-node-2 2026-04-17 03:34:27.347651 | orchestrator | 2026-04-17 03:34:27.347665 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-17 03:34:27.347678 | orchestrator | Friday 17 April 2026 03:34:23 +0000 (0:00:00.487) 0:00:01.490 ********** 2026-04-17 03:34:27.347696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:27.347713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:27.347729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:27.347745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:27.347807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:27.347823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:27.347834 | orchestrator | 2026-04-17 03:34:27.347843 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-17 03:34:27.347851 | orchestrator | Friday 17 April 2026 03:34:24 +0000 (0:00:01.051) 0:00:02.542 ********** 2026-04-17 03:34:27.347859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:27.347950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:27.347967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:27.347982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:27.347997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387367 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387486 | orchestrator | 2026-04-17 03:34:31.387505 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-17 03:34:31.387519 | orchestrator | Friday 17 April 2026 03:34:27 +0000 (0:00:02.419) 0:00:04.961 ********** 2026-04-17 03:34:31.387532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387719 | orchestrator | 2026-04-17 03:34:31.387730 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-17 03:34:31.387742 | orchestrator | Friday 17 April 2026 03:34:29 +0000 (0:00:02.365) 0:00:07.327 ********** 2026-04-17 03:34:31.387753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:31.387835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 03:34:42.347569 | orchestrator | 2026-04-17 03:34:42.347765 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-17 03:34:42.347780 | orchestrator | Friday 17 April 2026 03:34:31 +0000 (0:00:01.480) 0:00:08.807 ********** 2026-04-17 03:34:42.347789 | orchestrator | 2026-04-17 03:34:42.347797 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-17 03:34:42.347805 | orchestrator | Friday 17 April 2026 03:34:31 +0000 (0:00:00.064) 0:00:08.872 ********** 2026-04-17 03:34:42.347809 | orchestrator | 2026-04-17 03:34:42.347814 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-04-17 03:34:42.347819 | orchestrator | Friday 17 April 2026 03:34:31 +0000 (0:00:00.063) 0:00:08.935 ********** 2026-04-17 03:34:42.347824 | orchestrator | 2026-04-17 03:34:42.347829 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-17 03:34:42.347833 | orchestrator | Friday 17 April 2026 03:34:31 +0000 (0:00:00.062) 0:00:08.998 ********** 2026-04-17 03:34:42.347838 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:34:42.347844 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:34:42.347849 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:34:42.347853 | orchestrator | 2026-04-17 03:34:42.347858 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-04-17 03:34:42.347862 | orchestrator | Friday 17 April 2026 03:34:38 +0000 (0:00:07.567) 0:00:16.565 ********** 2026-04-17 03:34:42.347867 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:34:42.347872 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:34:42.347897 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:34:42.347902 | orchestrator | 2026-04-17 03:34:42.347915 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:34:42.347920 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:34:42.347926 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:34:42.347931 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 03:34:42.347936 | orchestrator | 2026-04-17 03:34:42.347940 | orchestrator | 2026-04-17 03:34:42.347956 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:34:42.347961 | orchestrator | Friday 17 April 2026 
03:34:42 +0000 (0:00:03.062) 0:00:19.628 ********** 2026-04-17 03:34:42.347965 | orchestrator | =============================================================================== 2026-04-17 03:34:42.347970 | orchestrator | redis : Restart redis container ----------------------------------------- 7.57s 2026-04-17 03:34:42.347974 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.06s 2026-04-17 03:34:42.347979 | orchestrator | redis : Copying over default config.json files -------------------------- 2.42s 2026-04-17 03:34:42.347983 | orchestrator | redis : Copying over redis config files --------------------------------- 2.37s 2026-04-17 03:34:42.347988 | orchestrator | redis : Check redis containers ------------------------------------------ 1.48s 2026-04-17 03:34:42.347992 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.05s 2026-04-17 03:34:42.347997 | orchestrator | redis : include_tasks --------------------------------------------------- 0.49s 2026-04-17 03:34:42.348001 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2026-04-17 03:34:42.348006 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-04-17 03:34:42.348010 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.19s 2026-04-17 03:34:44.703859 | orchestrator | 2026-04-17 03:34:44 | INFO  | Task 55d42726-10c8-4087-a37f-73390739721d (mariadb) was prepared for execution. 2026-04-17 03:34:44.703979 | orchestrator | 2026-04-17 03:34:44 | INFO  | It takes a moment until task 55d42726-10c8-4087-a37f-73390739721d (mariadb) has been started and output is visible here. 
2026-04-17 03:34:57.557427 | orchestrator | 2026-04-17 03:34:57.557528 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 03:34:57.557541 | orchestrator | 2026-04-17 03:34:57.557550 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 03:34:57.557559 | orchestrator | Friday 17 April 2026 03:34:48 +0000 (0:00:00.180) 0:00:00.180 ********** 2026-04-17 03:34:57.557567 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:34:57.557578 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:34:57.557586 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:34:57.557664 | orchestrator | 2026-04-17 03:34:57.557675 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 03:34:57.557684 | orchestrator | Friday 17 April 2026 03:34:49 +0000 (0:00:00.309) 0:00:00.489 ********** 2026-04-17 03:34:57.557692 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-17 03:34:57.557701 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-17 03:34:57.557708 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-17 03:34:57.557716 | orchestrator | 2026-04-17 03:34:57.557724 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-17 03:34:57.557735 | orchestrator | 2026-04-17 03:34:57.557744 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-17 03:34:57.557752 | orchestrator | Friday 17 April 2026 03:34:49 +0000 (0:00:00.561) 0:00:01.050 ********** 2026-04-17 03:34:57.557786 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 03:34:57.557794 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-17 03:34:57.557802 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-17 03:34:57.557810 | orchestrator | 
2026-04-17 03:34:57.557818 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 03:34:57.557825 | orchestrator | Friday 17 April 2026 03:34:50 +0000 (0:00:00.388) 0:00:01.438 ********** 2026-04-17 03:34:57.557834 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:34:57.557840 | orchestrator | 2026-04-17 03:34:57.557845 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-17 03:34:57.557850 | orchestrator | Friday 17 April 2026 03:34:50 +0000 (0:00:00.561) 0:00:02.000 ********** 2026-04-17 03:34:57.557871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 03:34:57.557901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 03:34:57.557925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 03:34:57.557934 | orchestrator | 2026-04-17 03:34:57.557941 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-17 03:34:57.557949 | orchestrator | Friday 17 April 2026 03:34:53 +0000 (0:00:02.375) 0:00:04.376 ********** 2026-04-17 03:34:57.557956 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:34:57.557964 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:34:57.557971 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:34:57.557979 | orchestrator | 2026-04-17 03:34:57.557986 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-17 03:34:57.557994 | orchestrator | Friday 17 April 2026 03:34:53 +0000 (0:00:00.502) 0:00:04.878 ********** 2026-04-17 03:34:57.558002 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:34:57.558010 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:34:57.558068 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:34:57.558076 | orchestrator | 2026-04-17 03:34:57.558084 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-17 03:34:57.558091 | orchestrator | Friday 17 April 2026 03:34:54 +0000 (0:00:01.216) 0:00:06.094 ********** 2026-04-17 03:34:57.558109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 03:35:04.323588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 03:35:04.323737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 03:35:04.323768 | orchestrator | 2026-04-17 03:35:04.323775 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-17 03:35:04.323784 | orchestrator | Friday 17 April 2026 03:34:57 +0000 (0:00:02.699) 0:00:08.794 ********** 2026-04-17 03:35:04.323790 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:35:04.323797 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:35:04.323805 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:35:04.323811 | orchestrator | 2026-04-17 03:35:04.323825 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-17 03:35:04.323847 | orchestrator | Friday 17 April 2026 03:34:58 +0000 (0:00:00.927) 0:00:09.722 ********** 2026-04-17 03:35:04.323854 | 
orchestrator | changed: [testbed-node-1] 2026-04-17 03:35:04.323861 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:35:04.323867 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:35:04.323874 | orchestrator | 2026-04-17 03:35:04.323882 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 03:35:04.323888 | orchestrator | Friday 17 April 2026 03:35:01 +0000 (0:00:03.357) 0:00:13.079 ********** 2026-04-17 03:35:04.323896 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:35:04.323902 | orchestrator | 2026-04-17 03:35:04.323910 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-17 03:35:04.323916 | orchestrator | Friday 17 April 2026 03:35:02 +0000 (0:00:00.476) 0:00:13.555 ********** 2026-04-17 03:35:04.323929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 03:35:04.323942 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:35:04.323956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 03:35:09.301374 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:35:09.301520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 03:35:09.301568 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:35:09.301578 | orchestrator | 2026-04-17 03:35:09.301587 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-17 03:35:09.301597 | orchestrator | Friday 17 April 2026 03:35:04 +0000 (0:00:02.003) 0:00:15.558 ********** 2026-04-17 03:35:09.301668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 03:35:09.301683 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:35:09.301726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 03:35:09.301755 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:35:09.301772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 03:35:09.301787 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:35:09.301800 | orchestrator | 2026-04-17 03:35:09.301812 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-17 03:35:09.301820 | orchestrator | Friday 17 April 2026 03:35:06 +0000 (0:00:02.526) 0:00:18.085 ********** 2026-04-17 03:35:09.301843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 03:35:12.119723 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:35:12.119812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 03:35:12.119824 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:35:12.119831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 03:35:12.119850 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:35:12.119856 | orchestrator | 2026-04-17 03:35:12.119862 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-17 03:35:12.119887 | orchestrator | Friday 17 April 2026 03:35:09 +0000 (0:00:02.457) 0:00:20.542 ********** 2026-04-17 03:35:12.119907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 03:35:12.119914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 03:35:12.119930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 03:37:22.074437 | orchestrator |
2026-04-17 03:37:22.074566 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-04-17 03:37:22.074580 | orchestrator | Friday 17 April 2026 03:35:12 +0000 (0:00:02.813) 0:00:23.355 **********
2026-04-17 03:37:22.074587 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:37:22.074595 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:37:22.074601 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:37:22.074607 | orchestrator |
2026-04-17 03:37:22.074614 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-04-17 03:37:22.074620 | orchestrator | Friday 17 April 2026 03:35:12 +0000 (0:00:00.804) 0:00:24.160 **********
2026-04-17 03:37:22.074626 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:37:22.074633 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:37:22.074639 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:37:22.074645 | orchestrator |
2026-04-17 03:37:22.074651 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-04-17 03:37:22.074657 | orchestrator | Friday 17 April 2026 03:35:13 +0000 (0:00:00.532) 0:00:24.693 **********
2026-04-17 03:37:22.074663 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:37:22.074668 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:37:22.074674 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:37:22.074680 | orchestrator |
2026-04-17 03:37:22.074686 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-04-17 03:37:22.074714 | orchestrator | Friday 17 April 2026 03:35:13 +0000 (0:00:00.334) 0:00:25.027 **********
2026-04-17 03:37:22.074721 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-04-17 03:37:22.074728 | orchestrator | ...ignoring
2026-04-17 03:37:22.074735 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-04-17 03:37:22.074741 | orchestrator | ...ignoring
2026-04-17 03:37:22.074747 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-04-17 03:37:22.074753 | orchestrator | ...ignoring
2026-04-17 03:37:22.074759 | orchestrator |
2026-04-17 03:37:22.074765 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-04-17 03:37:22.074800 | orchestrator | Friday 17 April 2026 03:35:24 +0000 (0:00:10.844) 0:00:35.872 **********
2026-04-17 03:37:22.074807 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:37:22.074813 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:37:22.074819 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:37:22.074825 | orchestrator |
2026-04-17 03:37:22.074831 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-04-17 03:37:22.074837 | orchestrator | Friday 17 April 2026 03:35:25 +0000 (0:00:00.418) 0:00:36.290 **********
2026-04-17 03:37:22.074843 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:37:22.074849 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:37:22.074855 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:37:22.074861 | orchestrator |
2026-04-17 03:37:22.074867 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-04-17 03:37:22.074873 | orchestrator | Friday 17 April 2026 03:35:25 +0000 (0:00:00.658) 0:00:36.949 **********
2026-04-17 03:37:22.074878 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:37:22.074884 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:37:22.074890 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:37:22.074896 | orchestrator |
2026-04-17 03:37:22.074902 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-04-17 03:37:22.074908 | orchestrator | Friday 17 April 2026 03:35:26 +0000 (0:00:00.427) 0:00:37.376 **********
2026-04-17 03:37:22.074926 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:37:22.074932 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:37:22.074938 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:37:22.074945 | orchestrator |
2026-04-17 03:37:22.074952 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-17 03:37:22.074959 | orchestrator | Friday 17 April 2026 03:35:26 +0000 (0:00:00.457) 0:00:37.833 **********
2026-04-17 03:37:22.074966 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:37:22.074974 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:37:22.074984 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:37:22.075010 | orchestrator |
2026-04-17 03:37:22.075023 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-17 03:37:22.075033 | orchestrator | Friday 17 April 2026 03:35:26 +0000 (0:00:00.400) 0:00:38.234 **********
2026-04-17 03:37:22.075044 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:37:22.075055 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:37:22.075065 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:37:22.075075 | orchestrator |
2026-04-17 03:37:22.075085 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-17 03:37:22.075095 | orchestrator | Friday 17 April 2026 03:35:27 +0000 (0:00:00.585) 0:00:38.820 **********
2026-04-17 03:37:22.075113 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:37:22.075126 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:37:22.075134 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-04-17 03:37:22.075140 | orchestrator |
2026-04-17 03:37:22.075147 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-04-17 03:37:22.075154 | orchestrator | Friday 17 April 2026 03:35:27 +0000 (0:00:00.398) 0:00:39.219 **********
2026-04-17 03:37:22.075161 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:37:22.075168 | orchestrator |
2026-04-17 03:37:22.075174 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-04-17 03:37:22.075181 | orchestrator | Friday 17 April 2026 03:35:37 +0000 (0:00:09.954) 0:00:49.173 **********
2026-04-17 03:37:22.075189 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:37:22.075199 | orchestrator |
2026-04-17 03:37:22.075214 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-17 03:37:22.075225 | orchestrator | Friday 17 April 2026 03:35:38 +0000 (0:00:00.126) 0:00:49.299 **********
2026-04-17 03:37:22.075248 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:37:22.075277 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:37:22.075289 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:37:22.075311 | orchestrator |
2026-04-17 03:37:22.075320 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-04-17 03:37:22.075326 | orchestrator | Friday 17 April 2026 03:35:38 +0000 (0:00:00.926) 0:00:50.226 **********
2026-04-17 03:37:22.075331 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:37:22.075337 | orchestrator |
2026-04-17 03:37:22.075343 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-04-17 03:37:22.075349 | orchestrator | Friday 17 April 2026 03:35:46 +0000 (0:00:07.489) 0:00:57.716 **********
2026-04-17 03:37:22.075354 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:37:22.075360 | orchestrator |
2026-04-17 03:37:22.075366 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-04-17 03:37:22.075371 | orchestrator | Friday 17 April 2026 03:35:48 +0000 (0:00:01.562) 0:00:59.279 **********
2026-04-17 03:37:22.075377 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:37:22.075383 | orchestrator |
2026-04-17 03:37:22.075389 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-04-17 03:37:22.075395 | orchestrator | Friday 17 April 2026 03:35:50 +0000 (0:00:02.532) 0:01:01.811 **********
2026-04-17 03:37:22.075400 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:37:22.075406 | orchestrator |
2026-04-17 03:37:22.075412 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-17 03:37:22.075418 | orchestrator | Friday 17 April 2026 03:35:50 +0000 (0:00:00.126) 0:01:01.938 **********
2026-04-17 03:37:22.075423 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:37:22.075429 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:37:22.075435 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:37:22.075440 | orchestrator |
2026-04-17 03:37:22.075446 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-17 03:37:22.075452 | orchestrator | Friday 17 April 2026 03:35:51 +0000 (0:00:00.339) 0:01:02.278 **********
2026-04-17 03:37:22.075458 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:37:22.075464 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-17 03:37:22.075469 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:37:22.075486 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:37:22.075492 | orchestrator |
2026-04-17 03:37:22.075497 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-17 03:37:22.075503 | orchestrator | skipping: no hosts matched
2026-04-17 03:37:22.075509 | orchestrator |
2026-04-17 03:37:22.075515 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-17 03:37:22.075525 | orchestrator |
2026-04-17 03:37:22.075535 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-17 03:37:22.075545 | orchestrator | Friday 17 April 2026 03:35:51 +0000 (0:00:00.536) 0:01:02.814 **********
2026-04-17 03:37:22.075554 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:37:22.075564 | orchestrator |
2026-04-17 03:37:22.075572 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-17 03:37:22.075581 | orchestrator | Friday 17 April 2026 03:36:09 +0000 (0:00:18.197) 0:01:21.012 **********
2026-04-17 03:37:22.075591 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:37:22.075599 | orchestrator |
2026-04-17 03:37:22.075609 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-17 03:37:22.075618 | orchestrator | Friday 17 April 2026 03:36:26 +0000 (0:00:16.571) 0:01:37.584 **********
2026-04-17 03:37:22.075627 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:37:22.075638 | orchestrator |
2026-04-17 03:37:22.075649 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-17 03:37:22.075662 | orchestrator |
2026-04-17 03:37:22.075673 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-17 03:37:22.075717 | orchestrator | Friday 17 April 2026 03:36:28 +0000 (0:00:02.342) 0:01:39.926 **********
2026-04-17 03:37:22.075729 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:37:22.075750 | orchestrator |
2026-04-17 03:37:22.075766 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-17 03:37:22.075785 | orchestrator | Friday 17 April 2026 03:36:45 +0000 (0:00:17.181) 0:01:57.108 **********
2026-04-17 03:37:22.075794 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:37:22.075803 | orchestrator |
2026-04-17 03:37:22.075812 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-17 03:37:22.075820
| orchestrator | Friday 17 April 2026 03:37:01 +0000 (0:00:15.572) 0:02:12.681 ********** 2026-04-17 03:37:22.075828 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:37:22.075837 | orchestrator | 2026-04-17 03:37:22.075847 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-17 03:37:22.075856 | orchestrator | 2026-04-17 03:37:22.075865 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-17 03:37:22.075875 | orchestrator | Friday 17 April 2026 03:37:03 +0000 (0:00:02.390) 0:02:15.071 ********** 2026-04-17 03:37:22.075885 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:37:22.075894 | orchestrator | 2026-04-17 03:37:22.075903 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-17 03:37:22.075913 | orchestrator | Friday 17 April 2026 03:37:14 +0000 (0:00:10.537) 0:02:25.609 ********** 2026-04-17 03:37:22.075919 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:37:22.075925 | orchestrator | 2026-04-17 03:37:22.075930 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-17 03:37:22.075936 | orchestrator | Friday 17 April 2026 03:37:18 +0000 (0:00:04.524) 0:02:30.133 ********** 2026-04-17 03:37:22.075942 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:37:22.075947 | orchestrator | 2026-04-17 03:37:22.075953 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-17 03:37:22.075959 | orchestrator | 2026-04-17 03:37:22.075965 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-17 03:37:22.075970 | orchestrator | Friday 17 April 2026 03:37:21 +0000 (0:00:02.505) 0:02:32.639 ********** 2026-04-17 03:37:22.075988 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:37:22.075994 | orchestrator | 
2026-04-17 03:37:22.076000 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-17 03:37:22.076015 | orchestrator | Friday 17 April 2026 03:37:22 +0000 (0:00:00.671) 0:02:33.310 ********** 2026-04-17 03:37:34.282993 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:37:34.283106 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:37:34.283136 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:37:34.283152 | orchestrator | 2026-04-17 03:37:34.283160 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-17 03:37:34.283168 | orchestrator | Friday 17 April 2026 03:37:24 +0000 (0:00:02.199) 0:02:35.510 ********** 2026-04-17 03:37:34.283175 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:37:34.283182 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:37:34.283189 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:37:34.283196 | orchestrator | 2026-04-17 03:37:34.283203 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-17 03:37:34.283209 | orchestrator | Friday 17 April 2026 03:37:26 +0000 (0:00:02.053) 0:02:37.564 ********** 2026-04-17 03:37:34.283216 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:37:34.283223 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:37:34.283229 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:37:34.283236 | orchestrator | 2026-04-17 03:37:34.283243 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-17 03:37:34.283250 | orchestrator | Friday 17 April 2026 03:37:28 +0000 (0:00:02.264) 0:02:39.828 ********** 2026-04-17 03:37:34.283256 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:37:34.283263 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:37:34.283269 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:37:34.283276 | orchestrator | 
2026-04-17 03:37:34.283282 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-17 03:37:34.283289 | orchestrator | Friday 17 April 2026 03:37:30 +0000 (0:00:02.188) 0:02:42.017 ********** 2026-04-17 03:37:34.283319 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:37:34.283328 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:37:34.283334 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:37:34.283340 | orchestrator | 2026-04-17 03:37:34.283347 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-17 03:37:34.283354 | orchestrator | Friday 17 April 2026 03:37:33 +0000 (0:00:02.698) 0:02:44.715 ********** 2026-04-17 03:37:34.283360 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:37:34.283367 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:37:34.283373 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:37:34.283380 | orchestrator | 2026-04-17 03:37:34.283386 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:37:34.283394 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-17 03:37:34.283402 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-17 03:37:34.283408 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-17 03:37:34.283415 | orchestrator | 2026-04-17 03:37:34.283422 | orchestrator | 2026-04-17 03:37:34.283428 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:37:34.283435 | orchestrator | Friday 17 April 2026 03:37:33 +0000 (0:00:00.478) 0:02:45.193 ********** 2026-04-17 03:37:34.283441 | orchestrator | =============================================================================== 2026-04-17 03:37:34.283448 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.38s 2026-04-17 03:37:34.283454 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.14s 2026-04-17 03:37:34.283569 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.84s 2026-04-17 03:37:34.283589 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.54s 2026-04-17 03:37:34.283600 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.95s 2026-04-17 03:37:34.283611 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.49s 2026-04-17 03:37:34.283622 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.73s 2026-04-17 03:37:34.283632 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.52s 2026-04-17 03:37:34.283643 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.36s 2026-04-17 03:37:34.283654 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.81s 2026-04-17 03:37:34.283665 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.70s 2026-04-17 03:37:34.283676 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.70s 2026-04-17 03:37:34.283687 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.53s 2026-04-17 03:37:34.283801 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.53s 2026-04-17 03:37:34.283821 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.51s 2026-04-17 03:37:34.283828 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.46s 2026-04-17 03:37:34.283835 | 
orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.38s 2026-04-17 03:37:34.283842 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.26s 2026-04-17 03:37:34.283848 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.20s 2026-04-17 03:37:34.283855 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.19s 2026-04-17 03:37:36.612896 | orchestrator | 2026-04-17 03:37:36 | INFO  | Task c9e7a7fd-be9f-4430-98fe-5d3970e048a2 (rabbitmq) was prepared for execution. 2026-04-17 03:37:36.613030 | orchestrator | 2026-04-17 03:37:36 | INFO  | It takes a moment until task c9e7a7fd-be9f-4430-98fe-5d3970e048a2 (rabbitmq) has been started and output is visible here. 2026-04-17 03:37:49.586795 | orchestrator | 2026-04-17 03:37:49.586900 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 03:37:49.586934 | orchestrator | 2026-04-17 03:37:49.586941 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 03:37:49.586948 | orchestrator | Friday 17 April 2026 03:37:40 +0000 (0:00:00.169) 0:00:00.169 ********** 2026-04-17 03:37:49.586953 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:37:49.586960 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:37:49.586965 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:37:49.586970 | orchestrator | 2026-04-17 03:37:49.586975 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 03:37:49.586981 | orchestrator | Friday 17 April 2026 03:37:41 +0000 (0:00:00.296) 0:00:00.465 ********** 2026-04-17 03:37:49.586986 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-17 03:37:49.586992 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-17 03:37:49.586997 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-17 03:37:49.587002 | orchestrator | 2026-04-17 03:37:49.587007 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-17 03:37:49.587013 | orchestrator | 2026-04-17 03:37:49.587019 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-17 03:37:49.587024 | orchestrator | Friday 17 April 2026 03:37:41 +0000 (0:00:00.529) 0:00:00.995 ********** 2026-04-17 03:37:49.587030 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:37:49.587036 | orchestrator | 2026-04-17 03:37:49.587041 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-17 03:37:49.587046 | orchestrator | Friday 17 April 2026 03:37:42 +0000 (0:00:00.525) 0:00:01.520 ********** 2026-04-17 03:37:49.587051 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:37:49.587056 | orchestrator | 2026-04-17 03:37:49.587061 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-17 03:37:49.587066 | orchestrator | Friday 17 April 2026 03:37:43 +0000 (0:00:00.972) 0:00:02.493 ********** 2026-04-17 03:37:49.587072 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:37:49.587078 | orchestrator | 2026-04-17 03:37:49.587083 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-17 03:37:49.587088 | orchestrator | Friday 17 April 2026 03:37:43 +0000 (0:00:00.353) 0:00:02.847 ********** 2026-04-17 03:37:49.587093 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:37:49.587098 | orchestrator | 2026-04-17 03:37:49.587103 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-17 03:37:49.587108 | orchestrator | Friday 17 April 2026 03:37:43 +0000 (0:00:00.357) 0:00:03.204 ********** 
2026-04-17 03:37:49.587114 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:37:49.587119 | orchestrator | 2026-04-17 03:37:49.587124 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-17 03:37:49.587129 | orchestrator | Friday 17 April 2026 03:37:44 +0000 (0:00:00.347) 0:00:03.552 ********** 2026-04-17 03:37:49.587134 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:37:49.587139 | orchestrator | 2026-04-17 03:37:49.587144 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-17 03:37:49.587149 | orchestrator | Friday 17 April 2026 03:37:44 +0000 (0:00:00.520) 0:00:04.072 ********** 2026-04-17 03:37:49.587167 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:37:49.587173 | orchestrator | 2026-04-17 03:37:49.587178 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-17 03:37:49.587202 | orchestrator | Friday 17 April 2026 03:37:45 +0000 (0:00:00.948) 0:00:05.021 ********** 2026-04-17 03:37:49.587207 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:37:49.587212 | orchestrator | 2026-04-17 03:37:49.587217 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-17 03:37:49.587222 | orchestrator | Friday 17 April 2026 03:37:46 +0000 (0:00:00.788) 0:00:05.809 ********** 2026-04-17 03:37:49.587227 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:37:49.587232 | orchestrator | 2026-04-17 03:37:49.587237 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-17 03:37:49.587246 | orchestrator | Friday 17 April 2026 03:37:46 +0000 (0:00:00.383) 0:00:06.193 ********** 2026-04-17 03:37:49.587255 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:37:49.587268 | orchestrator | 2026-04-17 
03:37:49.587277 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-17 03:37:49.587285 | orchestrator | Friday 17 April 2026 03:37:47 +0000 (0:00:00.344) 0:00:06.537 ********** 2026-04-17 03:37:49.587314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:37:49.587326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:37:49.587336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:37:49.587349 | orchestrator | 2026-04-17 03:37:49.587354 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-17 03:37:49.587364 | orchestrator | Friday 17 April 2026 03:37:47 +0000 (0:00:00.804) 0:00:07.342 ********** 2026-04-17 03:37:49.587370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:37:49.587382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:38:07.483451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:38:07.483556 | orchestrator | 2026-04-17 03:38:07.483569 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-17 03:38:07.483578 | orchestrator | Friday 17 April 2026 03:37:49 +0000 (0:00:01.592) 0:00:08.934 ********** 2026-04-17 03:38:07.483585 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-17 03:38:07.483616 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-17 03:38:07.483624 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-17 03:38:07.483631 | orchestrator | 2026-04-17 03:38:07.483637 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-04-17 03:38:07.483643 | orchestrator | Friday 17 April 2026 03:37:51 +0000 (0:00:01.441) 0:00:10.376 ********** 2026-04-17 03:38:07.483650 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-17 03:38:07.483658 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-17 03:38:07.483677 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-17 03:38:07.483685 | orchestrator | 2026-04-17 03:38:07.483691 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-17 03:38:07.483698 | orchestrator | Friday 17 April 2026 03:37:52 +0000 (0:00:01.583) 0:00:11.960 ********** 2026-04-17 03:38:07.483705 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-17 03:38:07.483712 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-17 03:38:07.483719 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-17 03:38:07.483797 | orchestrator | 2026-04-17 03:38:07.483802 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-17 03:38:07.483806 | orchestrator | Friday 17 April 2026 03:37:53 +0000 (0:00:01.373) 0:00:13.333 ********** 2026-04-17 03:38:07.483811 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-17 03:38:07.483815 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-17 03:38:07.483819 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-17 03:38:07.483824 | orchestrator | 2026-04-17 03:38:07.483828 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-04-17 03:38:07.483833 | orchestrator | Friday 17 April 2026 03:37:55 +0000 (0:00:01.670) 0:00:15.004 ********** 2026-04-17 03:38:07.483837 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-17 03:38:07.483842 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-17 03:38:07.483846 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-17 03:38:07.483850 | orchestrator | 2026-04-17 03:38:07.483855 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-17 03:38:07.483859 | orchestrator | Friday 17 April 2026 03:37:56 +0000 (0:00:01.333) 0:00:16.337 ********** 2026-04-17 03:38:07.483864 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-17 03:38:07.483868 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-17 03:38:07.483873 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-17 03:38:07.483877 | orchestrator | 2026-04-17 03:38:07.483882 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-17 03:38:07.483886 | orchestrator | Friday 17 April 2026 03:37:58 +0000 (0:00:01.373) 0:00:17.711 ********** 2026-04-17 03:38:07.483891 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:38:07.483896 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:38:07.483915 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:38:07.483920 | orchestrator | 2026-04-17 03:38:07.483924 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-17 03:38:07.483937 | orchestrator | Friday 
17 April 2026 03:37:58 +0000 (0:00:00.386) 0:00:18.097 ********** 2026-04-17 03:38:07.483942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:38:07.483953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:38:07.483958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 03:38:07.483963 | orchestrator | 2026-04-17 03:38:07.483968 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-17 03:38:07.483972 | orchestrator | Friday 17 April 2026 03:37:59 +0000 (0:00:01.186) 0:00:19.283 ********** 2026-04-17 03:38:07.483977 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:38:07.483981 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:38:07.483986 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:38:07.483990 | orchestrator | 2026-04-17 03:38:07.483994 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-04-17 03:38:07.484000 | orchestrator | Friday 17 April 2026 03:38:00 +0000 (0:00:00.794) 0:00:20.078 **********
2026-04-17 03:38:07.484010 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:38:07.484015 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:38:07.484020 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:38:07.484025 | orchestrator |
2026-04-17 03:38:07.484030 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-17 03:38:07.484038 | orchestrator | Friday 17 April 2026 03:38:07 +0000 (0:00:06.754) 0:00:26.832 **********
2026-04-17 03:39:38.063489 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:39:38.063615 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:39:38.063630 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:39:38.063640 | orchestrator |
2026-04-17 03:39:38.063682 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-17 03:39:38.063704 | orchestrator |
2026-04-17 03:39:38.063714 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-17 03:39:38.063724 | orchestrator | Friday 17 April 2026 03:38:07 +0000 (0:00:00.482) 0:00:27.315 **********
2026-04-17 03:39:38.063733 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:39:38.063743 | orchestrator |
2026-04-17 03:39:38.063752 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-17 03:39:38.063761 | orchestrator | Friday 17 April 2026 03:38:08 +0000 (0:00:00.607) 0:00:27.923 **********
2026-04-17 03:39:38.063770 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:39:38.063779 | orchestrator |
2026-04-17 03:39:38.063788 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-17 03:39:38.063797 | orchestrator | Friday 17 April 2026 03:38:08 +0000 (0:00:00.237) 0:00:28.160 **********
2026-04-17 03:39:38.063805 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:39:38.063905 | orchestrator |
2026-04-17 03:39:38.063919 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-17 03:39:38.063933 | orchestrator | Friday 17 April 2026 03:38:10 +0000 (0:00:01.578) 0:00:29.739 **********
2026-04-17 03:39:38.063946 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:39:38.063961 | orchestrator |
2026-04-17 03:39:38.063975 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-17 03:39:38.063989 | orchestrator |
2026-04-17 03:39:38.064001 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-17 03:39:38.064014 | orchestrator | Friday 17 April 2026 03:39:03 +0000 (0:00:52.654) 0:01:22.393 **********
2026-04-17 03:39:38.064028 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:39:38.064042 | orchestrator |
2026-04-17 03:39:38.064057 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-17 03:39:38.064072 | orchestrator | Friday 17 April 2026 03:39:03 +0000 (0:00:00.585) 0:01:22.978 **********
2026-04-17 03:39:38.064088 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:39:38.064104 | orchestrator |
2026-04-17 03:39:38.064120 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-17 03:39:38.064135 | orchestrator | Friday 17 April 2026 03:39:03 +0000 (0:00:00.224) 0:01:23.203 **********
2026-04-17 03:39:38.064151 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:39:38.064166 | orchestrator |
2026-04-17 03:39:38.064181 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-17 03:39:38.064192 | orchestrator | Friday 17 April 2026 03:39:05 +0000 (0:00:01.582) 0:01:24.785 **********
2026-04-17 03:39:38.064203 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:39:38.064213 | orchestrator |
2026-04-17 03:39:38.064239 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-17 03:39:38.064250 | orchestrator |
2026-04-17 03:39:38.064260 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-17 03:39:38.064270 | orchestrator | Friday 17 April 2026 03:39:19 +0000 (0:00:14.340) 0:01:39.126 **********
2026-04-17 03:39:38.064281 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:39:38.064291 | orchestrator |
2026-04-17 03:39:38.064301 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-17 03:39:38.064312 | orchestrator | Friday 17 April 2026 03:39:20 +0000 (0:00:00.770) 0:01:39.896 **********
2026-04-17 03:39:38.064345 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:39:38.064355 | orchestrator |
2026-04-17 03:39:38.064366 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-17 03:39:38.064376 | orchestrator | Friday 17 April 2026 03:39:20 +0000 (0:00:00.230) 0:01:40.126 **********
2026-04-17 03:39:38.064386 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:39:38.064397 | orchestrator |
2026-04-17 03:39:38.064406 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-17 03:39:38.064415 | orchestrator | Friday 17 April 2026 03:39:22 +0000 (0:00:01.619) 0:01:41.746 **********
2026-04-17 03:39:38.064423 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:39:38.064432 | orchestrator |
2026-04-17 03:39:38.064440 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-17 03:39:38.064449 | orchestrator |
2026-04-17 03:39:38.064458 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-17 03:39:38.064466 | orchestrator | Friday 17 April 2026 03:39:35 +0000 (0:00:12.684) 0:01:54.430 **********
2026-04-17 03:39:38.064475 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:39:38.064484 | orchestrator |
2026-04-17 03:39:38.064492 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-17 03:39:38.064536 | orchestrator | Friday 17 April 2026 03:39:35 +0000 (0:00:00.536) 0:01:54.967 **********
2026-04-17 03:39:38.064558 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-17 03:39:38.064573 | orchestrator | enable_outward_rabbitmq_True
2026-04-17 03:39:38.064587 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-17 03:39:38.064601 | orchestrator | outward_rabbitmq_restart
2026-04-17 03:39:38.064614 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:39:38.064628 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:39:38.064642 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:39:38.064657 | orchestrator |
2026-04-17 03:39:38.064672 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-04-17 03:39:38.064686 | orchestrator | skipping: no hosts matched
2026-04-17 03:39:38.064701 | orchestrator |
2026-04-17 03:39:38.064712 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-04-17 03:39:38.064722 | orchestrator | skipping: no hosts matched
2026-04-17 03:39:38.064730 | orchestrator |
2026-04-17 03:39:38.064739 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-04-17 03:39:38.064748 | orchestrator | skipping: no hosts matched
2026-04-17 03:39:38.064756 | orchestrator |
2026-04-17 03:39:38.064765 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:39:38.064794 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-17 03:39:38.064805 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:39:38.064843 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:39:38.064852 | orchestrator |
2026-04-17 03:39:38.064860 | orchestrator |
2026-04-17 03:39:38.064869 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:39:38.064878 | orchestrator | Friday 17 April 2026 03:39:37 +0000 (0:00:02.055) 0:01:57.022 **********
2026-04-17 03:39:38.064887 | orchestrator | ===============================================================================
2026-04-17 03:39:38.064895 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.68s
2026-04-17 03:39:38.064904 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.75s
2026-04-17 03:39:38.064913 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.78s
2026-04-17 03:39:38.064932 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.06s
2026-04-17 03:39:38.064941 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.96s
2026-04-17 03:39:38.064955 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.67s
2026-04-17 03:39:38.064973 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.59s
2026-04-17 03:39:38.064994 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.58s
2026-04-17 03:39:38.065008 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.44s
2026-04-17 03:39:38.065020 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.37s
2026-04-17 03:39:38.065033 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.37s
2026-04-17 03:39:38.065046 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.33s
2026-04-17 03:39:38.065059 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.19s
2026-04-17 03:39:38.065072 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.97s
2026-04-17 03:39:38.065085 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.95s
2026-04-17 03:39:38.065108 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.80s
2026-04-17 03:39:38.065120 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.79s
2026-04-17 03:39:38.065133 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.79s
2026-04-17 03:39:38.065146 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.69s
2026-04-17 03:39:38.065159 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.54s
2026-04-17 03:39:40.421871 | orchestrator | 2026-04-17 03:39:40 | INFO  | Task 5c266cec-a9fe-4e0e-8de4-f186979acc47 (openvswitch) was prepared for execution.
2026-04-17 03:39:40.421953 | orchestrator | 2026-04-17 03:39:40 | INFO  | It takes a moment until task 5c266cec-a9fe-4e0e-8de4-f186979acc47 (openvswitch) has been started and output is visible here.
2026-04-17 03:39:52.925956 | orchestrator |
2026-04-17 03:39:52.926105 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 03:39:52.926128 | orchestrator |
2026-04-17 03:39:52.926142 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 03:39:52.926156 | orchestrator | Friday 17 April 2026 03:39:44 +0000 (0:00:00.257) 0:00:00.257 **********
2026-04-17 03:39:52.926170 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:39:52.926185 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:39:52.926198 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:39:52.926211 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:39:52.926225 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:39:52.926238 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:39:52.926252 | orchestrator |
2026-04-17 03:39:52.926267 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 03:39:52.926281 | orchestrator | Friday 17 April 2026 03:39:45 +0000 (0:00:00.678) 0:00:00.935 **********
2026-04-17 03:39:52.926294 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-17 03:39:52.926308 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-17 03:39:52.926317 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-17 03:39:52.926325 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-17 03:39:52.926333 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-17 03:39:52.926341 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-17 03:39:52.926349 | orchestrator |
2026-04-17 03:39:52.926357 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-17 03:39:52.926365 | orchestrator |
2026-04-17 03:39:52.926396 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-17 03:39:52.926405 | orchestrator | Friday 17 April 2026 03:39:45 +0000 (0:00:00.579) 0:00:01.514 **********
2026-04-17 03:39:52.926414 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:39:52.926426 | orchestrator |
2026-04-17 03:39:52.926440 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-17 03:39:52.926459 | orchestrator | Friday 17 April 2026 03:39:46 +0000 (0:00:01.194) 0:00:02.709 **********
2026-04-17 03:39:52.926473 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-17 03:39:52.926486 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-17 03:39:52.926498 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-17 03:39:52.926512 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-17 03:39:52.926528 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-17 03:39:52.926541 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-17 03:39:52.926554 | orchestrator |
2026-04-17 03:39:52.926564 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-17 03:39:52.926573 | orchestrator | Friday 17 April 2026 03:39:48 +0000 (0:00:01.132) 0:00:03.841 **********
2026-04-17 03:39:52.926582 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-17 03:39:52.926592 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-17 03:39:52.926604 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-17 03:39:52.926624 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-17 03:39:52.926640 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-17 03:39:52.926652 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-17 03:39:52.926664 | orchestrator |
2026-04-17 03:39:52.926676 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-17 03:39:52.926688 | orchestrator | Friday 17 April 2026 03:39:49 +0000 (0:00:01.442) 0:00:05.284 **********
2026-04-17 03:39:52.926700 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-17 03:39:52.926712 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:39:52.926727 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-17 03:39:52.926740 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:39:52.926753 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-17 03:39:52.926766 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:39:52.926779 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-17 03:39:52.926792 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:39:52.926805 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-17 03:39:52.926817 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:39:52.926850 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-17 03:39:52.926862 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:39:52.926875 | orchestrator |
2026-04-17 03:39:52.926890 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-17 03:39:52.926904 | orchestrator | Friday 17 April 2026 03:39:50 +0000 (0:00:01.235) 0:00:06.519 **********
2026-04-17 03:39:52.926916 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:39:52.926930 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:39:52.926944 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:39:52.926957 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:39:52.926970 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:39:52.926980 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:39:52.926988 | orchestrator |
2026-04-17 03:39:52.926996 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-04-17 03:39:52.927004 | orchestrator | Friday 17 April 2026 03:39:51 +0000 (0:00:00.831) 0:00:07.350 **********
2026-04-17 03:39:52.927035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:52.927060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:52.927069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:52.927149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:52.927168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:52.927190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:55.253590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:55.253672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:55.253679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:55.253684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:55.253700 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:55.253731 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:55.253737 | orchestrator |
2026-04-17 03:39:55.253743 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-04-17 03:39:55.253749 | orchestrator | Friday 17 April 2026 03:39:52 +0000 (0:00:01.353) 0:00:08.704 **********
2026-04-17 03:39:55.253753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:55.253758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:55.253763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:55.253768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:55.253775 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:55.253788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:57.991191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:57.991291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:57.991304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:57.991330 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:57.991362 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:57.991390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:39:57.991400 | orchestrator |
2026-04-17 03:39:57.991411 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-04-17 03:39:57.991422 | orchestrator | Friday 17 April 2026 03:39:55 +0000 (0:00:02.339) 0:00:11.043 **********
2026-04-17 03:39:57.991431 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:39:57.991442 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:39:57.991451 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:39:57.991460 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:39:57.991469 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:39:57.991478 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:39:57.991487 | orchestrator |
2026-04-17 03:39:57.991497 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-04-17 03:39:57.991524 | orchestrator | Friday 17 April 2026 03:39:56 +0000 (0:00:01.028) 0:00:12.071 **********
2026-04-17 03:39:57.991542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:57.991554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:39:57.991589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/',
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 03:39:57.991600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 03:39:57.991618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 03:40:21.307437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 03:40:21.307577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:40:21.307599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:40:21.307670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:40:21.307690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:40:21.307725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:40:21.307741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 03:40:21.307755 | orchestrator |
2026-04-17 03:40:21.307771 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-17 03:40:21.307787 | orchestrator | Friday 17 April 2026 03:39:58 +0000 (0:00:01.705) 0:00:13.777 **********
2026-04-17 03:40:21.307799 | orchestrator |
2026-04-17 03:40:21.307813 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-17 03:40:21.307827 | orchestrator | Friday 17 April 2026 03:39:58 +0000 (0:00:00.361) 0:00:14.138 **********
2026-04-17 03:40:21.307840 | orchestrator |
2026-04-17 03:40:21.307939 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-17 03:40:21.307981 | orchestrator | Friday 17 April 2026 03:39:58 +0000 (0:00:00.134) 0:00:14.273 **********
2026-04-17 03:40:21.307990 | orchestrator |
2026-04-17 03:40:21.308002 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-17 03:40:21.308016 | orchestrator | Friday 17 April 2026 03:39:58 +0000 (0:00:00.127) 0:00:14.400 **********
2026-04-17 03:40:21.308028 | orchestrator |
2026-04-17 03:40:21.308041 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-17 03:40:21.308054 | orchestrator | Friday 17 April 2026 03:39:58 +0000 (0:00:00.128) 0:00:14.529 **********
2026-04-17 03:40:21.308068 | orchestrator |
2026-04-17 03:40:21.308082 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-17 03:40:21.308095 | orchestrator | Friday 17 April 2026 03:39:58 +0000 (0:00:00.131) 0:00:14.661 **********
2026-04-17 03:40:21.308109 | orchestrator |
2026-04-17 03:40:21.308122 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-04-17 03:40:21.308136 | orchestrator | Friday 17 April 2026 03:39:59 +0000 (0:00:00.142) 0:00:14.803 **********
2026-04-17 03:40:21.308150 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:40:21.308164 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:40:21.308178 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:40:21.308191 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:40:21.308206 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:40:21.308219 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:40:21.308233 | orchestrator |
2026-04-17 03:40:21.308247 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-04-17 03:40:21.308262 | orchestrator | Friday 17 April 2026 03:40:06 +0000 (0:00:06.912) 0:00:21.715 **********
2026-04-17 03:40:21.308274 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:40:21.308289 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:40:21.308302 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:40:21.308316 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:40:21.308338 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:40:21.308352 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:40:21.308364 | orchestrator |
2026-04-17 03:40:21.308377 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-17 03:40:21.308391 | orchestrator | Friday 17 April 2026 03:40:07 +0000 (0:00:01.071) 0:00:22.786 **********
2026-04-17 03:40:21.308404 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:40:21.308417 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:40:21.308431 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:40:21.308445 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:40:21.308458 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:40:21.308472 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:40:21.308485 | orchestrator |
2026-04-17 03:40:21.308499 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-04-17 03:40:21.308512 | orchestrator | Friday 17 April 2026 03:40:15 +0000 (0:00:08.052) 0:00:30.838 **********
2026-04-17 03:40:21.308526 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-04-17 03:40:21.308535 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-04-17 03:40:21.308543 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-04-17 03:40:21.308551 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-04-17 03:40:21.308559 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-04-17 03:40:21.308566 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-17
03:40:21.308574 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-04-17 03:40:21.308593 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-04-17 03:40:34.216182 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-04-17 03:40:34.216268 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-04-17 03:40:34.216274 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-04-17 03:40:34.216278 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-04-17 03:40:34.216282 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-17 03:40:34.216287 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-17 03:40:34.216291 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-17 03:40:34.216294 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-17 03:40:34.216306 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-17 03:40:34.216310 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-17 03:40:34.216314 | orchestrator |
2026-04-17 03:40:34.216318 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-04-17 03:40:34.216324 | orchestrator | Friday 17 April 2026 03:40:21 +0000 (0:00:06.159) 0:00:36.998 **********
2026-04-17 03:40:34.216328 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-04-17 03:40:34.216333 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:40:34.216338 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-04-17 03:40:34.216342 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:40:34.216346 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-04-17 03:40:34.216349 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:40:34.216353 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-04-17 03:40:34.216357 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-04-17 03:40:34.216361 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-04-17 03:40:34.216365 | orchestrator |
2026-04-17 03:40:34.216369 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-17 03:40:34.216373 | orchestrator | Friday 17 April 2026 03:40:23 +0000 (0:00:02.345) 0:00:39.343 **********
2026-04-17 03:40:34.216377 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-17 03:40:34.216381 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:40:34.216385 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-17 03:40:34.216389 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:40:34.216392 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-17 03:40:34.216396 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:40:34.216400 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-17 03:40:34.216404 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-17 03:40:34.216408 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-17 03:40:34.216412 | orchestrator |
2026-04-17 03:40:34.216427 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-17 03:40:34.216431 | orchestrator | Friday 17 April 2026 03:40:26 +0000 (0:00:02.985) 0:00:42.329 **********
2026-04-17 03:40:34.216435 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:40:34.216438 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:40:34.216442 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:40:34.216446 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:40:34.216465 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:40:34.216470 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:40:34.216473 | orchestrator |
2026-04-17 03:40:34.216477 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:40:34.216482 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-17 03:40:34.216487 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-17 03:40:34.216491 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-17 03:40:34.216495 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 03:40:34.216499 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 03:40:34.216502 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 03:40:34.216506 | orchestrator |
2026-04-17 03:40:34.216510 | orchestrator |
2026-04-17 03:40:34.216514 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:40:34.216518 | orchestrator | Friday 17 April 2026 03:40:33 +0000 (0:00:07.137) 0:00:49.467 **********
2026-04-17 03:40:34.216532 | orchestrator | ===============================================================================
2026-04-17 03:40:34.216536 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.19s
2026-04-17 03:40:34.216540 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 6.91s
2026-04-17 03:40:34.216543 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.16s
2026-04-17 03:40:34.216547 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 2.99s
2026-04-17 03:40:34.216551 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.35s
2026-04-17 03:40:34.216556 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.34s
2026-04-17 03:40:34.216562 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.71s
2026-04-17 03:40:34.216567 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.44s
2026-04-17 03:40:34.216573 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.35s
2026-04-17 03:40:34.216579 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.24s
2026-04-17 03:40:34.216585 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.19s
2026-04-17 03:40:34.216590 | orchestrator | module-load : Load modules ---------------------------------------------- 1.13s
2026-04-17 03:40:34.216596 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.07s
2026-04-17 03:40:34.216602 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.03s
2026-04-17 03:40:34.216607 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.03s
2026-04-17 03:40:34.216613 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.83s
2026-04-17 03:40:34.216619 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.68s
2026-04-17 03:40:34.216625 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2026-04-17 03:40:36.883437 | orchestrator | 2026-04-17 03:40:36 | INFO  | Task a15701e7-8416-4bac-937c-0f43e7eba2a3 (ovn) was prepared for execution.
2026-04-17 03:40:36.883537 | orchestrator | 2026-04-17 03:40:36 | INFO  | It takes a moment until task a15701e7-8416-4bac-937c-0f43e7eba2a3 (ovn) has been started and output is visible here.
2026-04-17 03:40:47.915781 | orchestrator |
2026-04-17 03:40:47.915950 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 03:40:47.915965 | orchestrator |
2026-04-17 03:40:47.915973 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 03:40:47.915980 | orchestrator | Friday 17 April 2026 03:40:41 +0000 (0:00:00.179) 0:00:00.179 **********
2026-04-17 03:40:47.915987 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:40:47.915995 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:40:47.916001 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:40:47.916008 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:40:47.916014 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:40:47.916020 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:40:47.916026 | orchestrator |
2026-04-17 03:40:47.916032 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 03:40:47.916038 | orchestrator | Friday 17 April 2026 03:40:41 +0000 (0:00:00.777) 0:00:00.957 **********
2026-04-17 03:40:47.916045 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-17 03:40:47.916052 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-17
03:40:47.916071 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-17 03:40:47.916077 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-17 03:40:47.916084 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-17 03:40:47.916090 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-17 03:40:47.916096 | orchestrator |
2026-04-17 03:40:47.916102 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-17 03:40:47.916109 | orchestrator |
2026-04-17 03:40:47.916115 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-17 03:40:47.916122 | orchestrator | Friday 17 April 2026 03:40:42 +0000 (0:00:00.836) 0:00:01.793 **********
2026-04-17 03:40:47.916128 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:40:47.916136 | orchestrator |
2026-04-17 03:40:47.916142 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-17 03:40:47.916148 | orchestrator | Friday 17 April 2026 03:40:43 +0000 (0:00:01.097) 0:00:02.891 **********
2026-04-17 03:40:47.916156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916232 | orchestrator |
2026-04-17 03:40:47.916238 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-04-17 03:40:47.916244 | orchestrator | Friday 17 April 2026 03:40:45 +0000 (0:00:01.274) 0:00:04.165 **********
2026-04-17 03:40:47.916251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916336 | orchestrator |
2026-04-17 03:40:47.916348 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-04-17 03:40:47.916359 | orchestrator | Friday 17 April 2026 03:40:46 +0000 (0:00:01.210) 0:00:05.670 **********
2026-04-17 03:40:47.916370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916381 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:40:47.916397 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:41:12.204713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:41:12.204825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:41:12.204841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:41:12.204852 | orchestrator |
2026-04-17 03:41:12.204864 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-04-17 03:41:12.204876 | orchestrator | Friday 17 April 2026 03:40:47 +0000 (0:00:01.210) 0:00:06.880 **********
2026-04-17 03:41:12.204886 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:41:12.204897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:41:12.205020 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:41:12.205041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:41:12.205058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:41:12.205098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:41:12.205113 | orchestrator | 2026-04-17 03:41:12.205124 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-17 03:41:12.205133 | orchestrator | Friday 17 April 2026 03:40:49 +0000 (0:00:01.568) 0:00:08.448 ********** 
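Each `(item={...})` entry in the tasks above is a kolla-ansible service definition (container name, image, bind mounts). As a hypothetical illustration only (not OSISM or kolla-ansible code), such a dict maps onto docker/podman CLI arguments roughly like this:

```python
# Hypothetical helper: render a kolla-style service definition (the dict
# shape visible in the log items above) into container CLI arguments.
def container_args(service: dict) -> list[str]:
    args = ["--name", service["container_name"]]
    for volume in service.get("volumes", []):
        args += ["-v", volume]  # bind mounts and named volumes, verbatim
    args.append(service["image"])
    return args

# Values taken from the ovn-controller items logged above.
ovn_controller = {
    "container_name": "ovn_controller",
    "image": "registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130",
    "volumes": [
        "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
        "/run/openvswitch:/run/openvswitch:shared",
        "/etc/localtime:/etc/localtime:ro",
        "kolla_logs:/var/log/kolla/",
    ],
}

print(container_args(ovn_controller)[:2])  # ['--name', 'ovn_controller']
```

The real deployment drives these containers through systemd units (hence the "systemd override" tasks above); the sketch only shows how the logged `volumes` list corresponds to container mount flags.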
2026-04-17 03:41:12.205151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:41:12.205162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:41:12.205172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:41:12.205182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:41:12.205201 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:41:12.205211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:41:12.205221 | orchestrator | 2026-04-17 03:41:12.205232 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-17 03:41:12.205243 | orchestrator | Friday 17 April 2026 03:40:50 +0000 (0:00:01.323) 0:00:09.772 ********** 2026-04-17 03:41:12.205255 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:41:12.205268 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:41:12.205278 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:41:12.205289 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:41:12.205300 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:41:12.205310 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:41:12.205321 | orchestrator | 2026-04-17 03:41:12.205332 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-17 03:41:12.205343 | orchestrator | Friday 17 April 2026 03:40:53 +0000 (0:00:02.453) 0:00:12.225 ********** 2026-04-17 03:41:12.205354 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 
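The "Configure OVN in OVSDB" task writes these key/value pairs (`ovn-encap-ip`, `ovn-encap-type`, `ovn-remote`, probe intervals, bridge and chassis-MAC mappings) into the local Open vSwitch database so ovn-controller can find the southbound DB cluster. A minimal sketch, assuming only the `ovn-remote` format shown in this task's output, of splitting that connection string into endpoints:

```python
# Split an OVN remote connection string such as the one this task sets
# ("tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642")
# into (protocol, host, port) tuples. Assumes tcp + IPv4 entries as in
# this log; IPv6 literals would need different handling.
def parse_ovn_remote(remote: str) -> list[tuple[str, str, int]]:
    endpoints = []
    for entry in remote.split(","):
        proto, host, port = entry.split(":")
        endpoints.append((proto, host, int(port)))
    return endpoints

remote = "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"
print(parse_ovn_remote(remote)[0])  # ('tcp', '192.168.16.10', 6642)
```

The three endpoints correspond to the OVN SB DB raft cluster on testbed-node-0/1/2 deployed later in this play; listing all members lets ovn-controller fail over if the current leader goes away.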
2026-04-17 03:41:12.205365 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-17 03:41:12.205376 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-17 03:41:12.205388 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-17 03:41:12.205398 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-17 03:41:12.205409 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-17 03:41:12.205427 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 03:41:54.064505 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 03:41:54.064627 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 03:41:54.064644 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 03:41:54.064652 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 03:41:54.064671 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 03:41:54.064677 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 03:41:54.064685 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 03:41:54.064690 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 03:41:54.064714 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 03:41:54.064720 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 03:41:54.064726 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 03:41:54.064732 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 03:41:54.064738 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 03:41:54.064744 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 03:41:54.064749 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 03:41:54.064758 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 03:41:54.064768 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 03:41:54.064776 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 03:41:54.064785 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 03:41:54.064794 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 03:41:54.064802 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 03:41:54.064811 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-04-17 03:41:54.064820 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 03:41:54.064828 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 03:41:54.064837 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 03:41:54.064846 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 03:41:54.064856 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 03:41:54.064865 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 03:41:54.064874 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-17 03:41:54.064885 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-17 03:41:54.064890 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 03:41:54.064896 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-17 03:41:54.064901 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-17 03:41:54.064907 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-17 03:41:54.064912 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-17 03:41:54.064920 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 
'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-17 03:41:54.064968 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-17 03:41:54.064986 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-17 03:41:54.065001 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-17 03:41:54.065016 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-17 03:41:54.065025 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-17 03:41:54.065033 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-17 03:41:54.065042 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-17 03:41:54.065050 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-17 03:41:54.065059 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-17 03:41:54.065069 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-17 03:41:54.065077 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-17 03:41:54.065087 | orchestrator | 2026-04-17 03:41:54.065097 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-04-17 03:41:54.065107 | orchestrator | Friday 17 April 2026 03:41:11 +0000 (0:00:18.326) 0:00:30.551 ********** 2026-04-17 03:41:54.065116 | orchestrator | 2026-04-17 03:41:54.065125 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 03:41:54.065134 | orchestrator | Friday 17 April 2026 03:41:11 +0000 (0:00:00.235) 0:00:30.787 ********** 2026-04-17 03:41:54.065143 | orchestrator | 2026-04-17 03:41:54.065152 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 03:41:54.065161 | orchestrator | Friday 17 April 2026 03:41:11 +0000 (0:00:00.074) 0:00:30.861 ********** 2026-04-17 03:41:54.065171 | orchestrator | 2026-04-17 03:41:54.065180 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 03:41:54.065189 | orchestrator | Friday 17 April 2026 03:41:11 +0000 (0:00:00.089) 0:00:30.951 ********** 2026-04-17 03:41:54.065198 | orchestrator | 2026-04-17 03:41:54.065207 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 03:41:54.065215 | orchestrator | Friday 17 April 2026 03:41:12 +0000 (0:00:00.080) 0:00:31.031 ********** 2026-04-17 03:41:54.065224 | orchestrator | 2026-04-17 03:41:54.065233 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 03:41:54.065242 | orchestrator | Friday 17 April 2026 03:41:12 +0000 (0:00:00.065) 0:00:31.097 ********** 2026-04-17 03:41:54.065251 | orchestrator | 2026-04-17 03:41:54.065261 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-17 03:41:54.065270 | orchestrator | Friday 17 April 2026 03:41:12 +0000 (0:00:00.064) 0:00:31.161 ********** 2026-04-17 03:41:54.065279 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:41:54.065289 | orchestrator | ok: 
[testbed-node-5] 2026-04-17 03:41:54.065297 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:41:54.065306 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:41:54.065315 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:41:54.065324 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:41:54.065333 | orchestrator | 2026-04-17 03:41:54.065343 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-17 03:41:54.065352 | orchestrator | Friday 17 April 2026 03:41:13 +0000 (0:00:01.471) 0:00:32.633 ********** 2026-04-17 03:41:54.065361 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:41:54.065370 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:41:54.065387 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:41:54.065396 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:41:54.065404 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:41:54.065412 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:41:54.065420 | orchestrator | 2026-04-17 03:41:54.065429 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-17 03:41:54.065437 | orchestrator | 2026-04-17 03:41:54.065447 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-17 03:41:54.065456 | orchestrator | Friday 17 April 2026 03:41:51 +0000 (0:00:38.075) 0:01:10.709 ********** 2026-04-17 03:41:54.065466 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:41:54.065475 | orchestrator | 2026-04-17 03:41:54.065484 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-17 03:41:54.065493 | orchestrator | Friday 17 April 2026 03:41:52 +0000 (0:00:00.700) 0:01:11.409 ********** 2026-04-17 03:41:54.065502 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-17 03:41:54.065511 | orchestrator | 2026-04-17 03:41:54.065520 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-17 03:41:54.065529 | orchestrator | Friday 17 April 2026 03:41:53 +0000 (0:00:00.576) 0:01:11.986 ********** 2026-04-17 03:41:54.065537 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:41:54.065547 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:41:54.065556 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:41:54.065566 | orchestrator | 2026-04-17 03:41:54.065574 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-17 03:41:54.065593 | orchestrator | Friday 17 April 2026 03:41:54 +0000 (0:00:01.037) 0:01:13.024 ********** 2026-04-17 03:42:05.149128 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:42:05.149257 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:42:05.149278 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:42:05.149290 | orchestrator | 2026-04-17 03:42:05.149306 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-17 03:42:05.149345 | orchestrator | Friday 17 April 2026 03:41:54 +0000 (0:00:00.360) 0:01:13.384 ********** 2026-04-17 03:42:05.149357 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:42:05.149388 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:42:05.149397 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:42:05.149404 | orchestrator | 2026-04-17 03:42:05.149412 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-17 03:42:05.149420 | orchestrator | Friday 17 April 2026 03:41:54 +0000 (0:00:00.339) 0:01:13.724 ********** 2026-04-17 03:42:05.149427 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:42:05.149435 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:42:05.149442 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:42:05.149449 | orchestrator | 
2026-04-17 03:42:05.149456 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-17 03:42:05.149464 | orchestrator | Friday 17 April 2026 03:41:55 +0000 (0:00:00.344) 0:01:14.069 ********** 2026-04-17 03:42:05.149471 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:42:05.149478 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:42:05.149485 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:42:05.149493 | orchestrator | 2026-04-17 03:42:05.149507 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-17 03:42:05.149518 | orchestrator | Friday 17 April 2026 03:41:55 +0000 (0:00:00.581) 0:01:14.650 ********** 2026-04-17 03:42:05.149529 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.149541 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.149554 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.149567 | orchestrator | 2026-04-17 03:42:05.149579 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-17 03:42:05.149591 | orchestrator | Friday 17 April 2026 03:41:55 +0000 (0:00:00.292) 0:01:14.943 ********** 2026-04-17 03:42:05.149633 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.149644 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.149653 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.149661 | orchestrator | 2026-04-17 03:42:05.149670 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-17 03:42:05.149678 | orchestrator | Friday 17 April 2026 03:41:56 +0000 (0:00:00.314) 0:01:15.258 ********** 2026-04-17 03:42:05.149686 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.149695 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.149703 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.149711 | orchestrator | 2026-04-17 
03:42:05.149719 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-17 03:42:05.149728 | orchestrator | Friday 17 April 2026 03:41:56 +0000 (0:00:00.314) 0:01:15.573 ********** 2026-04-17 03:42:05.149736 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.149744 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.149753 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.149761 | orchestrator | 2026-04-17 03:42:05.149769 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-17 03:42:05.149778 | orchestrator | Friday 17 April 2026 03:41:56 +0000 (0:00:00.275) 0:01:15.848 ********** 2026-04-17 03:42:05.149787 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.149795 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.149804 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.149812 | orchestrator | 2026-04-17 03:42:05.149821 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-17 03:42:05.149829 | orchestrator | Friday 17 April 2026 03:41:57 +0000 (0:00:00.547) 0:01:16.395 ********** 2026-04-17 03:42:05.149838 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.149847 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.149855 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.149863 | orchestrator | 2026-04-17 03:42:05.149872 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-17 03:42:05.149880 | orchestrator | Friday 17 April 2026 03:41:57 +0000 (0:00:00.305) 0:01:16.701 ********** 2026-04-17 03:42:05.149889 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.149897 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.149907 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.149919 | orchestrator | 2026-04-17 
03:42:05.149935 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-17 03:42:05.149977 | orchestrator | Friday 17 April 2026 03:41:58 +0000 (0:00:00.328) 0:01:17.030 ********** 2026-04-17 03:42:05.149990 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.150002 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.150071 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.150085 | orchestrator | 2026-04-17 03:42:05.150098 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-17 03:42:05.150110 | orchestrator | Friday 17 April 2026 03:41:58 +0000 (0:00:00.306) 0:01:17.337 ********** 2026-04-17 03:42:05.150122 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.150134 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.150146 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.150158 | orchestrator | 2026-04-17 03:42:05.150169 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-17 03:42:05.150182 | orchestrator | Friday 17 April 2026 03:41:58 +0000 (0:00:00.482) 0:01:17.819 ********** 2026-04-17 03:42:05.150194 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.150206 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.150218 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.150230 | orchestrator | 2026-04-17 03:42:05.150244 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-17 03:42:05.150256 | orchestrator | Friday 17 April 2026 03:41:59 +0000 (0:00:00.301) 0:01:18.120 ********** 2026-04-17 03:42:05.150268 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.150294 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.150307 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.150320 | orchestrator | 2026-04-17 
03:42:05.150332 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-17 03:42:05.150345 | orchestrator | Friday 17 April 2026 03:41:59 +0000 (0:00:00.305) 0:01:18.426 ********** 2026-04-17 03:42:05.150383 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.150395 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.150408 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.150420 | orchestrator | 2026-04-17 03:42:05.150432 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-17 03:42:05.150444 | orchestrator | Friday 17 April 2026 03:41:59 +0000 (0:00:00.288) 0:01:18.714 ********** 2026-04-17 03:42:05.150465 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:42:05.150473 | orchestrator | 2026-04-17 03:42:05.150480 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-17 03:42:05.150487 | orchestrator | Friday 17 April 2026 03:42:00 +0000 (0:00:00.753) 0:01:19.468 ********** 2026-04-17 03:42:05.150494 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:42:05.150502 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:42:05.150509 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:42:05.150516 | orchestrator | 2026-04-17 03:42:05.150523 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-17 03:42:05.150531 | orchestrator | Friday 17 April 2026 03:42:00 +0000 (0:00:00.500) 0:01:19.969 ********** 2026-04-17 03:42:05.150539 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:42:05.150552 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:42:05.150563 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:42:05.150574 | orchestrator | 2026-04-17 03:42:05.150586 | orchestrator | TASK [ovn-db : Check NB cluster status] 
**************************************** 2026-04-17 03:42:05.150598 | orchestrator | Friday 17 April 2026 03:42:01 +0000 (0:00:00.417) 0:01:20.386 ********** 2026-04-17 03:42:05.150612 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.150625 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.150637 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.150649 | orchestrator | 2026-04-17 03:42:05.150660 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-04-17 03:42:05.150668 | orchestrator | Friday 17 April 2026 03:42:01 +0000 (0:00:00.528) 0:01:20.915 ********** 2026-04-17 03:42:05.150675 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.150682 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.150689 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.150696 | orchestrator | 2026-04-17 03:42:05.150703 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-04-17 03:42:05.150711 | orchestrator | Friday 17 April 2026 03:42:02 +0000 (0:00:00.345) 0:01:21.261 ********** 2026-04-17 03:42:05.150718 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.150725 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.150732 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.150739 | orchestrator | 2026-04-17 03:42:05.150747 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-04-17 03:42:05.150754 | orchestrator | Friday 17 April 2026 03:42:02 +0000 (0:00:00.375) 0:01:21.636 ********** 2026-04-17 03:42:05.150761 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.150768 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.150775 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.150783 | orchestrator | 2026-04-17 03:42:05.150790 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2026-04-17 03:42:05.150797 | orchestrator | Friday 17 April 2026 03:42:02 +0000 (0:00:00.316) 0:01:21.953 ********** 2026-04-17 03:42:05.150805 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.150812 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.150833 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.150841 | orchestrator | 2026-04-17 03:42:05.150848 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-17 03:42:05.150857 | orchestrator | Friday 17 April 2026 03:42:03 +0000 (0:00:00.319) 0:01:22.272 ********** 2026-04-17 03:42:05.150869 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:05.150877 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:05.150884 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:05.150891 | orchestrator | 2026-04-17 03:42:05.150898 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-17 03:42:05.150905 | orchestrator | Friday 17 April 2026 03:42:03 +0000 (0:00:00.489) 0:01:22.761 ********** 2026-04-17 03:42:05.150915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:05.150926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-17 03:42:05.150933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:05.150976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162308 | orchestrator | 2026-04-17 03:42:11.162313 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-17 03:42:11.162319 | orchestrator | Friday 17 April 2026 03:42:05 +0000 (0:00:01.351) 0:01:24.113 ********** 2026-04-17 03:42:11.162324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162386 | orchestrator | 2026-04-17 03:42:11.162390 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-17 03:42:11.162394 | orchestrator | Friday 17 April 2026 03:42:08 +0000 (0:00:03.604) 0:01:27.717 ********** 2026-04-17 03:42:11.162398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:11.162426 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.271579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.271723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.271784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.271807 | orchestrator | 2026-04-17 03:42:25.271828 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-17 03:42:25.271849 | 
orchestrator | Friday 17 April 2026 03:42:10 +0000 (0:00:01.999) 0:01:29.717 ********** 2026-04-17 03:42:25.271868 | orchestrator | 2026-04-17 03:42:25.271886 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-17 03:42:25.271905 | orchestrator | Friday 17 April 2026 03:42:10 +0000 (0:00:00.074) 0:01:29.792 ********** 2026-04-17 03:42:25.271922 | orchestrator | 2026-04-17 03:42:25.271940 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-17 03:42:25.271958 | orchestrator | Friday 17 April 2026 03:42:11 +0000 (0:00:00.263) 0:01:30.056 ********** 2026-04-17 03:42:25.272014 | orchestrator | 2026-04-17 03:42:25.272033 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-17 03:42:25.272052 | orchestrator | Friday 17 April 2026 03:42:11 +0000 (0:00:00.065) 0:01:30.121 ********** 2026-04-17 03:42:25.272070 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:42:25.272092 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:42:25.272111 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:42:25.272131 | orchestrator | 2026-04-17 03:42:25.272150 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-17 03:42:25.272169 | orchestrator | Friday 17 April 2026 03:42:13 +0000 (0:00:02.389) 0:01:32.510 ********** 2026-04-17 03:42:25.272187 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:42:25.272206 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:42:25.272224 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:42:25.272243 | orchestrator | 2026-04-17 03:42:25.272262 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-17 03:42:25.272281 | orchestrator | Friday 17 April 2026 03:42:15 +0000 (0:00:02.397) 0:01:34.908 ********** 2026-04-17 03:42:25.272300 | orchestrator | changed: 
[testbed-node-0] 2026-04-17 03:42:25.272319 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:42:25.272337 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:42:25.272354 | orchestrator | 2026-04-17 03:42:25.272373 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-17 03:42:25.272390 | orchestrator | Friday 17 April 2026 03:42:18 +0000 (0:00:02.331) 0:01:37.240 ********** 2026-04-17 03:42:25.272407 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:42:25.272426 | orchestrator | 2026-04-17 03:42:25.272445 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-17 03:42:25.272463 | orchestrator | Friday 17 April 2026 03:42:18 +0000 (0:00:00.121) 0:01:37.361 ********** 2026-04-17 03:42:25.272503 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:42:25.272523 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:42:25.272557 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:42:25.272576 | orchestrator | 2026-04-17 03:42:25.272595 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-17 03:42:25.272614 | orchestrator | Friday 17 April 2026 03:42:19 +0000 (0:00:01.033) 0:01:38.395 ********** 2026-04-17 03:42:25.272631 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:25.272650 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:25.272669 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:42:25.272706 | orchestrator | 2026-04-17 03:42:25.272725 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-17 03:42:25.272745 | orchestrator | Friday 17 April 2026 03:42:20 +0000 (0:00:00.622) 0:01:39.018 ********** 2026-04-17 03:42:25.272762 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:42:25.272781 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:42:25.272792 | orchestrator | ok: [testbed-node-2] 2026-04-17 
03:42:25.272803 | orchestrator | 2026-04-17 03:42:25.272814 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-17 03:42:25.272824 | orchestrator | Friday 17 April 2026 03:42:20 +0000 (0:00:00.823) 0:01:39.841 ********** 2026-04-17 03:42:25.272835 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:42:25.272846 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:42:25.272872 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:42:25.272883 | orchestrator | 2026-04-17 03:42:25.272894 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-17 03:42:25.272905 | orchestrator | Friday 17 April 2026 03:42:21 +0000 (0:00:00.565) 0:01:40.406 ********** 2026-04-17 03:42:25.272915 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:42:25.272926 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:42:25.272959 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:42:25.273007 | orchestrator | 2026-04-17 03:42:25.273019 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-17 03:42:25.273030 | orchestrator | Friday 17 April 2026 03:42:22 +0000 (0:00:01.373) 0:01:41.780 ********** 2026-04-17 03:42:25.273051 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:42:25.273070 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:42:25.273080 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:42:25.273089 | orchestrator | 2026-04-17 03:42:25.273099 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-04-17 03:42:25.273109 | orchestrator | Friday 17 April 2026 03:42:23 +0000 (0:00:00.735) 0:01:42.515 ********** 2026-04-17 03:42:25.273119 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:42:25.273129 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:42:25.273138 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:42:25.273148 | orchestrator | 2026-04-17 
03:42:25.273158 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-17 03:42:25.273167 | orchestrator | Friday 17 April 2026 03:42:23 +0000 (0:00:00.312) 0:01:42.827 ********** 2026-04-17 03:42:25.273179 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.273191 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.273201 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.273211 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.273230 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.273240 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.273250 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.273265 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:25.273285 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:32.262673 | orchestrator | 2026-04-17 03:42:32.262747 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-17 03:42:32.262754 | orchestrator | Friday 17 April 2026 03:42:25 +0000 (0:00:01.401) 0:01:44.228 ********** 2026-04-17 03:42:32.262760 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:32.262767 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:32.262771 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:32.262775 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:32.262799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:32.262803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:32.262807 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:32.262811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-17 03:42:32.262824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:32.262828 | orchestrator | 2026-04-17 03:42:32.262832 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-17 03:42:32.262836 | orchestrator | Friday 17 April 2026 03:42:29 +0000 (0:00:03.814) 0:01:48.043 ********** 2026-04-17 03:42:32.262850 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:32.262854 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 03:42:32.262858 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 
03:42:32.262862 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:42:32.262870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:42:32.262874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:42:32.262877 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:42:32.262881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:42:32.262885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 03:42:32.262889 | orchestrator |
2026-04-17 03:42:32.262895 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-17 03:42:32.262899 | orchestrator | Friday 17 April 2026 03:42:32 +0000 (0:00:02.975) 0:01:51.019 **********
2026-04-17 03:42:32.262903 | orchestrator |
2026-04-17 03:42:32.262907 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-17 03:42:32.262910 | orchestrator | Friday 17 April 2026 03:42:32 +0000 (0:00:00.068) 0:01:51.088 **********
2026-04-17 03:42:32.262914 | orchestrator |
2026-04-17 03:42:32.262918 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-17 03:42:32.262922 | orchestrator | Friday 17 April 2026 03:42:32 +0000 (0:00:00.067) 0:01:51.156 **********
2026-04-17 03:42:32.262925 | orchestrator |
2026-04-17 03:42:32.262932 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-17 03:42:56.078839 | orchestrator | Friday 17 April 2026 03:42:32 +0000 (0:00:00.067) 0:01:51.223 **********
2026-04-17 03:42:56.078938 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:42:56.078949 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:42:56.078956 | orchestrator |
2026-04-17 03:42:56.078963 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-17 03:42:56.078969 | orchestrator | Friday 17 April 2026 03:42:38 +0000 (0:00:06.083) 0:01:57.306 **********
2026-04-17 03:42:56.078975 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:42:56.079048 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:42:56.079054 | orchestrator |
2026-04-17 03:42:56.079060 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-17 03:42:56.079066 | orchestrator | Friday 17 April 2026 03:42:44 +0000 (0:00:06.119) 0:02:03.426 **********
2026-04-17 03:42:56.079092 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:42:56.079098 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:42:56.079103 | orchestrator |
2026-04-17 03:42:56.079109 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-17 03:42:56.079115 | orchestrator | Friday 17 April 2026 03:42:50 +0000 (0:00:06.028) 0:02:09.454 **********
2026-04-17 03:42:56.079121 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:42:56.079127 | orchestrator |
2026-04-17 03:42:56.079133 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-17 03:42:56.079138 | orchestrator | Friday 17 April 2026 03:42:50 +0000 (0:00:00.134) 0:02:09.589 **********
2026-04-17 03:42:56.079144 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:42:56.079151 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:42:56.079157 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:42:56.079162 | orchestrator |
2026-04-17 03:42:56.079168 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-17 03:42:56.079174 | orchestrator | Friday 17 April 2026 03:42:51 +0000 (0:00:01.041) 0:02:10.630 **********
2026-04-17 03:42:56.079243 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:42:56.079250 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:42:56.079256 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:42:56.079262 | orchestrator |
2026-04-17 03:42:56.079267 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-17 03:42:56.079273 | orchestrator | Friday 17 April 2026 03:42:52 +0000 (0:00:00.631) 0:02:11.261 **********
2026-04-17 03:42:56.079279 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:42:56.079285 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:42:56.079291 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:42:56.079297 | orchestrator |
2026-04-17 03:42:56.079303 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-17 03:42:56.079309 | orchestrator | Friday 17 April 2026 03:42:53 +0000 (0:00:00.789) 0:02:12.051 **********
2026-04-17 03:42:56.079314 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:42:56.079320 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:42:56.079326 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:42:56.079331 | orchestrator |
2026-04-17 03:42:56.079337 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-17 03:42:56.079343 | orchestrator | Friday 17 April 2026 03:42:53 +0000 (0:00:00.642) 0:02:12.693 **********
2026-04-17 03:42:56.079349 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:42:56.079354 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:42:56.079360 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:42:56.079366 | orchestrator |
2026-04-17 03:42:56.079371 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-17 03:42:56.079377 | orchestrator | Friday 17 April 2026 03:42:54 +0000 (0:00:01.039) 0:02:13.732 **********
2026-04-17 03:42:56.079383 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:42:56.079390 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:42:56.079396 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:42:56.079403 | orchestrator |
2026-04-17 03:42:56.079410 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:42:56.079418 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-17 03:42:56.079428 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-17 03:42:56.079438 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-17 03:42:56.079448 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:42:56.079458 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:42:56.079477 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 03:42:56.079486 | orchestrator |
2026-04-17 03:42:56.079496 | orchestrator |
2026-04-17 03:42:56.079505 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:42:56.079530 | orchestrator | Friday 17 April 2026 03:42:55 +0000 (0:00:00.939) 0:02:14.672 **********
2026-04-17 03:42:56.079540 | orchestrator | ===============================================================================
2026-04-17 03:42:56.079550 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 38.08s
2026-04-17 03:42:56.079559 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.33s
2026-04-17 03:42:56.079569 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.52s
2026-04-17 03:42:56.079578 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.47s
2026-04-17 03:42:56.079587 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.36s
2026-04-17 03:42:56.079616 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.81s
2026-04-17 03:42:56.079626 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.60s
2026-04-17 03:42:56.079635 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.98s
2026-04-17 03:42:56.079644 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.45s
2026-04-17 03:42:56.079650 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.00s
2026-04-17 03:42:56.079655 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.57s
2026-04-17 03:42:56.079673 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.50s
2026-04-17 03:42:56.079686 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.47s
2026-04-17 03:42:56.079692 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.40s
2026-04-17 03:42:56.079698 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.37s
2026-04-17 03:42:56.079704 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.35s
2026-04-17 03:42:56.079709 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.32s
2026-04-17 03:42:56.079715 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.27s
2026-04-17 03:42:56.079720 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.21s
2026-04-17 03:42:56.079726 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.10s
2026-04-17 03:42:56.418652 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-17 03:42:56.418735 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-04-17 03:42:58.565925 | orchestrator | 2026-04-17 03:42:58 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-17 03:43:08.697610 | orchestrator | 2026-04-17 03:43:08 | INFO  | Task 2f71bff0-59d8-4b84-a231-0157a79a86d7 (wipe-partitions) was prepared for execution.
2026-04-17 03:43:08.697715 | orchestrator | 2026-04-17 03:43:08 | INFO  | It takes a moment until task 2f71bff0-59d8-4b84-a231-0157a79a86d7 (wipe-partitions) has been started and output is visible here.
2026-04-17 03:43:21.293092 | orchestrator |
2026-04-17 03:43:21.293176 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-17 03:43:21.293183 | orchestrator |
2026-04-17 03:43:21.293188 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-17 03:43:21.293192 | orchestrator | Friday 17 April 2026 03:43:12 +0000 (0:00:00.141) 0:00:00.141 **********
2026-04-17 03:43:21.293196 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:43:21.293202 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:43:21.293222 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:43:21.293226 | orchestrator |
2026-04-17 03:43:21.293230 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-17 03:43:21.293234 | orchestrator | Friday 17 April 2026 03:43:13 +0000 (0:00:00.581) 0:00:00.723 **********
2026-04-17 03:43:21.293238 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:43:21.293241 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:43:21.293245 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:43:21.293249 | orchestrator |
2026-04-17 03:43:21.293253 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-17 03:43:21.293257 | orchestrator | Friday 17 April 2026 03:43:13 +0000 (0:00:00.384) 0:00:01.108 **********
2026-04-17 03:43:21.293261 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:43:21.293266 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:43:21.293270 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:43:21.293273 | orchestrator |
2026-04-17 03:43:21.293277 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-17 03:43:21.293281 | orchestrator | Friday 17 April 2026 03:43:14 +0000 (0:00:00.596) 0:00:01.705 **********
2026-04-17 03:43:21.293285 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:43:21.293288 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:43:21.293292 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:43:21.293296 | orchestrator |
2026-04-17 03:43:21.293300 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-17 03:43:21.293304 | orchestrator | Friday 17 April 2026 03:43:14 +0000 (0:00:00.265) 0:00:01.970 **********
2026-04-17 03:43:21.293308 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-17 03:43:21.293313 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-17 03:43:21.293316 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-17 03:43:21.293320 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-17 03:43:21.293324 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-17 03:43:21.293328 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-17 03:43:21.293331 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-17 03:43:21.293335 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-17 03:43:21.293349 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-17 03:43:21.293352 | orchestrator |
2026-04-17 03:43:21.293356 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-17 03:43:21.293360 | orchestrator | Friday 17 April 2026 03:43:15 +0000 (0:00:01.212) 0:00:03.183 **********
2026-04-17 03:43:21.293364 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-17 03:43:21.293368 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-17 03:43:21.293372 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-17 03:43:21.293375 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-17 03:43:21.293379 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-17 03:43:21.293383 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-17 03:43:21.293386 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-17 03:43:21.293390 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-17 03:43:21.293394 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-17 03:43:21.293398 | orchestrator |
2026-04-17 03:43:21.293401 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-17 03:43:21.293405 | orchestrator | Friday 17 April 2026 03:43:17 +0000 (0:00:01.591) 0:00:04.775 **********
2026-04-17 03:43:21.293409 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-17 03:43:21.293413 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-17 03:43:21.293417 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-17 03:43:21.293420 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-17 03:43:21.293424 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-17 03:43:21.293428 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-17 03:43:21.293436 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-17 03:43:21.293439 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-17 03:43:21.293443 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-17 03:43:21.293447 | orchestrator |
2026-04-17 03:43:21.293451 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-17 03:43:21.293454 | orchestrator | Friday 17 April 2026 03:43:19 +0000 (0:00:02.095) 0:00:06.870 **********
2026-04-17 03:43:21.293458 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:43:21.293462 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:43:21.293466 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:43:21.293469 | orchestrator |
2026-04-17 03:43:21.293473 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-17 03:43:21.293477 | orchestrator | Friday 17 April 2026 03:43:20 +0000 (0:00:00.584) 0:00:07.454 **********
2026-04-17 03:43:21.293481 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:43:21.293484 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:43:21.293488 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:43:21.293492 | orchestrator |
2026-04-17 03:43:21.293496 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:43:21.293500 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:43:21.293505 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:43:21.293519 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:43:21.293523 | orchestrator |
2026-04-17 03:43:21.293527 | orchestrator |
2026-04-17 03:43:21.293531 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:43:21.293535 | orchestrator | Friday 17 April 2026 03:43:20 +0000 (0:00:00.668) 0:00:08.123 **********
2026-04-17 03:43:21.293539 | orchestrator | ===============================================================================
2026-04-17 03:43:21.293542 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.10s
2026-04-17 03:43:21.293546 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.59s
2026-04-17 03:43:21.293550 | orchestrator | Check device availability ----------------------------------------------- 1.21s
2026-04-17 03:43:21.293554 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s
2026-04-17 03:43:21.293557 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s
2026-04-17 03:43:21.293561 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s
2026-04-17 03:43:21.293565 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s
2026-04-17 03:43:21.293569 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s
2026-04-17 03:43:21.293573 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s
2026-04-17 03:43:33.745078 | orchestrator | 2026-04-17 03:43:33 | INFO  | Task 8e210498-704b-4675-a0b9-b2358adfa9f9 (facts) was prepared for execution.
2026-04-17 03:43:33.745203 | orchestrator | 2026-04-17 03:43:33 | INFO  | It takes a moment until task 8e210498-704b-4675-a0b9-b2358adfa9f9 (facts) has been started and output is visible here.
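The wipe sequence in the play above maps onto four standard CLI steps: wipefs, zeroing the first 32M, a udev rule reload, and a device-event trigger. A minimal sketch of replaying it by hand, assuming the testbed's /dev/sdb-/dev/sdd OSD disks and root privileges; the DRY_RUN guard is an addition for safe testing, not part of the play:

```shell
#!/bin/sh
# Replay of the wipe-partitions steps from the play above.
# DRY_RUN=1 (the default here) prints each command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    run wipefs --all "$dev"                       # drop filesystem/LVM/GPT signatures
    run dd if=/dev/zero of="$dev" bs=1M count=32  # overwrite first 32M with zeros
done
run udevadm control --reload-rules                # reload udev rules
run udevadm trigger                               # request device events from the kernel
```

With DRY_RUN=0 this destroys data on the listed disks; the play runs the equivalent modules on testbed-node-3 through testbed-node-5 before the ceph-configure-lvm-volumes task claims the devices.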
2026-04-17 03:43:46.530209 | orchestrator |
2026-04-17 03:43:46.530316 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-17 03:43:46.530332 | orchestrator |
2026-04-17 03:43:46.530341 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-17 03:43:46.530351 | orchestrator | Friday 17 April 2026 03:43:37 +0000 (0:00:00.270) 0:00:00.270 **********
2026-04-17 03:43:46.530360 | orchestrator | ok: [testbed-manager]
2026-04-17 03:43:46.530393 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:43:46.530402 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:43:46.530411 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:43:46.530419 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:43:46.530428 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:43:46.530436 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:43:46.530445 | orchestrator |
2026-04-17 03:43:46.530454 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-17 03:43:46.530463 | orchestrator | Friday 17 April 2026 03:43:38 +0000 (0:00:01.022) 0:00:01.293 **********
2026-04-17 03:43:46.530472 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:43:46.530482 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:43:46.530490 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:43:46.530499 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:43:46.530508 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:43:46.530516 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:43:46.530525 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:43:46.530533 | orchestrator |
2026-04-17 03:43:46.530542 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-17 03:43:46.530551 | orchestrator |
2026-04-17 03:43:46.530562 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-17 03:43:46.530572 | orchestrator | Friday 17 April 2026 03:43:39 +0000 (0:00:01.220) 0:00:02.514 **********
2026-04-17 03:43:46.530582 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:43:46.530592 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:43:46.530602 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:43:46.530612 | orchestrator | ok: [testbed-manager]
2026-04-17 03:43:46.530621 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:43:46.530632 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:43:46.530641 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:43:46.530651 | orchestrator |
2026-04-17 03:43:46.530661 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-17 03:43:46.530670 | orchestrator |
2026-04-17 03:43:46.530679 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-17 03:43:46.530687 | orchestrator | Friday 17 April 2026 03:43:45 +0000 (0:00:05.755) 0:00:08.269 **********
2026-04-17 03:43:46.530696 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:43:46.530704 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:43:46.530713 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:43:46.530721 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:43:46.530730 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:43:46.530738 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:43:46.530747 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:43:46.530755 | orchestrator |
2026-04-17 03:43:46.530764 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:43:46.530773 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:43:46.530856 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:43:46.530873 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:43:46.530881 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:43:46.530890 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:43:46.530899 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:43:46.530907 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 03:43:46.530925 | orchestrator |
2026-04-17 03:43:46.530934 | orchestrator |
2026-04-17 03:43:46.530942 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:43:46.530951 | orchestrator | Friday 17 April 2026 03:43:46 +0000 (0:00:00.522) 0:00:08.792 **********
2026-04-17 03:43:46.530960 | orchestrator | ===============================================================================
2026-04-17 03:43:46.530968 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.76s
2026-04-17 03:43:46.530977 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s
2026-04-17 03:43:46.530985 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s
2026-04-17 03:43:46.531025 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2026-04-17 03:43:48.557535 | orchestrator | 2026-04-17 03:43:48 | INFO  | Task ce2d9307-4459-4578-b378-0e9e04a4bd6e (ceph-configure-lvm-volumes) was prepared for execution.
2026-04-17 03:43:48.557648 | orchestrator | 2026-04-17 03:43:48 | INFO  | It takes a moment until task ce2d9307-4459-4578-b378-0e9e04a4bd6e (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-04-17 03:44:01.633050 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-17 03:44:01.633142 | orchestrator | 2.16.14
2026-04-17 03:44:01.633153 | orchestrator |
2026-04-17 03:44:01.633160 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-17 03:44:01.633166 | orchestrator |
2026-04-17 03:44:01.633172 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-17 03:44:01.633177 | orchestrator | Friday 17 April 2026 03:43:52 +0000 (0:00:00.333) 0:00:00.333 **********
2026-04-17 03:44:01.633183 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-17 03:44:01.633189 | orchestrator |
2026-04-17 03:44:01.633194 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-17 03:44:01.633211 | orchestrator | Friday 17 April 2026 03:43:53 +0000 (0:00:00.275) 0:00:00.609 **********
2026-04-17 03:44:01.633217 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:44:01.633223 | orchestrator |
2026-04-17 03:44:01.633228 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633233 | orchestrator | Friday 17 April 2026 03:43:53 +0000 (0:00:00.248) 0:00:00.857 **********
2026-04-17 03:44:01.633238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-17 03:44:01.633244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-17 03:44:01.633249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-17 03:44:01.633256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-17 03:44:01.633265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-17 03:44:01.633272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-17 03:44:01.633280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-17 03:44:01.633288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-17 03:44:01.633296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-17 03:44:01.633304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-17 03:44:01.633312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-17 03:44:01.633320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-17 03:44:01.633329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-17 03:44:01.633359 | orchestrator |
2026-04-17 03:44:01.633369 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633377 | orchestrator | Friday 17 April 2026 03:43:53 +0000 (0:00:00.514) 0:00:01.372 **********
2026-04-17 03:44:01.633385 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633393 | orchestrator |
2026-04-17 03:44:01.633402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633411 | orchestrator | Friday 17 April 2026 03:43:54 +0000 (0:00:00.209) 0:00:01.581 **********
2026-04-17 03:44:01.633419 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633427 | orchestrator |
2026-04-17 03:44:01.633433 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633438 | orchestrator | Friday 17 April 2026 03:43:54 +0000 (0:00:00.261) 0:00:01.842 **********
2026-04-17 03:44:01.633443 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633448 | orchestrator |
2026-04-17 03:44:01.633453 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633458 | orchestrator | Friday 17 April 2026 03:43:54 +0000 (0:00:00.228) 0:00:02.071 **********
2026-04-17 03:44:01.633463 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633468 | orchestrator |
2026-04-17 03:44:01.633473 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633478 | orchestrator | Friday 17 April 2026 03:43:54 +0000 (0:00:00.235) 0:00:02.307 **********
2026-04-17 03:44:01.633483 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633488 | orchestrator |
2026-04-17 03:44:01.633493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633498 | orchestrator | Friday 17 April 2026 03:43:55 +0000 (0:00:00.214) 0:00:02.521 **********
2026-04-17 03:44:01.633503 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633508 | orchestrator |
2026-04-17 03:44:01.633513 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633518 | orchestrator | Friday 17 April 2026 03:43:55 +0000 (0:00:00.210) 0:00:02.732 **********
2026-04-17 03:44:01.633523 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633528 | orchestrator |
2026-04-17 03:44:01.633533 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633538 | orchestrator | Friday 17 April 2026 03:43:55 +0000 (0:00:00.259) 0:00:02.991 **********
2026-04-17 03:44:01.633543 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633548 | orchestrator |
2026-04-17 03:44:01.633553 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633558 | orchestrator | Friday 17 April 2026 03:43:55 +0000 (0:00:00.232) 0:00:03.223 **********
2026-04-17 03:44:01.633563 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d)
2026-04-17 03:44:01.633570 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d)
2026-04-17 03:44:01.633575 | orchestrator |
2026-04-17 03:44:01.633580 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633597 | orchestrator | Friday 17 April 2026 03:43:56 +0000 (0:00:00.704) 0:00:03.927 **********
2026-04-17 03:44:01.633602 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c)
2026-04-17 03:44:01.633607 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c)
2026-04-17 03:44:01.633612 | orchestrator |
2026-04-17 03:44:01.633618 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633623 | orchestrator | Friday 17 April 2026 03:43:57 +0000 (0:00:00.816) 0:00:04.744 **********
2026-04-17 03:44:01.633628 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098)
2026-04-17 03:44:01.633638 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098)
2026-04-17 03:44:01.633649 | orchestrator |
2026-04-17 03:44:01.633655 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633660 | orchestrator | Friday 17 April 2026 03:43:58 +0000 (0:00:01.039) 0:00:05.783 **********
2026-04-17 03:44:01.633665 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b)
2026-04-17 03:44:01.633670 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b)
2026-04-17 03:44:01.633675 | orchestrator |
2026-04-17 03:44:01.633680 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:01.633685 | orchestrator | Friday 17 April 2026 03:43:58 +0000 (0:00:00.489) 0:00:06.272 **********
2026-04-17 03:44:01.633690 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-17 03:44:01.633695 | orchestrator |
2026-04-17 03:44:01.633700 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:44:01.633705 | orchestrator | Friday 17 April 2026 03:43:59 +0000 (0:00:00.372) 0:00:06.645 **********
2026-04-17 03:44:01.633710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-17 03:44:01.633715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-17 03:44:01.633720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-17 03:44:01.633725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-17 03:44:01.633730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-17 03:44:01.633735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-17 03:44:01.633740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-17 03:44:01.633746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-17 03:44:01.633754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-17 03:44:01.633761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-17 03:44:01.633769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-17 03:44:01.633777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-17 03:44:01.633784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-17 03:44:01.633792 | orchestrator |
2026-04-17 03:44:01.633800 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:44:01.633808 | orchestrator | Friday 17 April 2026 03:43:59 +0000 (0:00:00.447) 0:00:07.093 **********
2026-04-17 03:44:01.633813 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633818 | orchestrator |
2026-04-17 03:44:01.633823 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:44:01.633828 | orchestrator | Friday 17 April 2026 03:43:59 +0000 (0:00:00.225) 0:00:07.318 **********
2026-04-17 03:44:01.633833 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633838 | orchestrator |
2026-04-17 03:44:01.633843 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:44:01.633848 | orchestrator | Friday 17 April 2026 03:44:00 +0000 (0:00:00.223) 0:00:07.542 **********
2026-04-17 03:44:01.633853 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633858 | orchestrator |
2026-04-17 03:44:01.633863 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:44:01.633868 | orchestrator | Friday 17 April 2026 03:44:00 +0000 (0:00:00.215) 0:00:07.758 **********
2026-04-17 03:44:01.633873 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633878 | orchestrator |
2026-04-17 03:44:01.633883 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:44:01.633893 | orchestrator | Friday 17 April 2026 03:44:00 +0000 (0:00:00.224) 0:00:07.982 **********
2026-04-17 03:44:01.633898 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633903 | orchestrator |
2026-04-17 03:44:01.633908 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:44:01.633913 | orchestrator | Friday 17 April 2026 03:44:01 +0000 (0:00:00.223) 0:00:08.206 **********
2026-04-17 03:44:01.633918 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633923 | orchestrator |
2026-04-17 03:44:01.633928 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:44:01.633933 | orchestrator | Friday 17 April 2026 03:44:01 +0000 (0:00:00.648) 0:00:08.854 **********
2026-04-17 03:44:01.633938 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:01.633943 | orchestrator |
2026-04-17 03:44:01.633953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:44:09.336827 | orchestrator | Friday 17 April 2026 03:44:01 +0000 (0:00:00.215) 0:00:09.070 **********
2026-04-17 03:44:09.336932 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:44:09.336944 | orchestrator |
2026-04-17 03:44:09.336960 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:44:09.336967 | orchestrator | Friday 17 April 2026 03:44:01 +0000 (0:00:00.213) 0:00:09.284 **********
2026-04-17 03:44:09.336982 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-17 03:44:09.337084 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-17
03:44:09.337094 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-17 03:44:09.337101 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-17 03:44:09.337107 | orchestrator | 2026-04-17 03:44:09.337126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:09.337132 | orchestrator | Friday 17 April 2026 03:44:02 +0000 (0:00:00.707) 0:00:09.991 ********** 2026-04-17 03:44:09.337138 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337144 | orchestrator | 2026-04-17 03:44:09.337150 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:09.337156 | orchestrator | Friday 17 April 2026 03:44:02 +0000 (0:00:00.220) 0:00:10.212 ********** 2026-04-17 03:44:09.337162 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337168 | orchestrator | 2026-04-17 03:44:09.337174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:09.337180 | orchestrator | Friday 17 April 2026 03:44:02 +0000 (0:00:00.220) 0:00:10.432 ********** 2026-04-17 03:44:09.337186 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337192 | orchestrator | 2026-04-17 03:44:09.337198 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:09.337204 | orchestrator | Friday 17 April 2026 03:44:03 +0000 (0:00:00.234) 0:00:10.667 ********** 2026-04-17 03:44:09.337209 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337215 | orchestrator | 2026-04-17 03:44:09.337221 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-17 03:44:09.337227 | orchestrator | Friday 17 April 2026 03:44:03 +0000 (0:00:00.237) 0:00:10.905 ********** 2026-04-17 03:44:09.337233 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-17 03:44:09.337239 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-17 03:44:09.337245 | orchestrator | 2026-04-17 03:44:09.337251 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-17 03:44:09.337257 | orchestrator | Friday 17 April 2026 03:44:03 +0000 (0:00:00.188) 0:00:11.093 ********** 2026-04-17 03:44:09.337263 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337269 | orchestrator | 2026-04-17 03:44:09.337274 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-17 03:44:09.337280 | orchestrator | Friday 17 April 2026 03:44:03 +0000 (0:00:00.179) 0:00:11.273 ********** 2026-04-17 03:44:09.337286 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337310 | orchestrator | 2026-04-17 03:44:09.337316 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-17 03:44:09.337322 | orchestrator | Friday 17 April 2026 03:44:03 +0000 (0:00:00.143) 0:00:11.416 ********** 2026-04-17 03:44:09.337328 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337334 | orchestrator | 2026-04-17 03:44:09.337340 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-17 03:44:09.337346 | orchestrator | Friday 17 April 2026 03:44:04 +0000 (0:00:00.345) 0:00:11.761 ********** 2026-04-17 03:44:09.337351 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:44:09.337357 | orchestrator | 2026-04-17 03:44:09.337363 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-17 03:44:09.337369 | orchestrator | Friday 17 April 2026 03:44:04 +0000 (0:00:00.149) 0:00:11.911 ********** 2026-04-17 03:44:09.337376 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba7178ba-163b-58b0-89b4-3a73c9468ec2'}}) 2026-04-17 03:44:09.337383 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '34b96a2b-74e9-5d3b-a409-9327cdd3ba08'}}) 2026-04-17 03:44:09.337390 | orchestrator | 2026-04-17 03:44:09.337397 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-17 03:44:09.337403 | orchestrator | Friday 17 April 2026 03:44:04 +0000 (0:00:00.166) 0:00:12.077 ********** 2026-04-17 03:44:09.337411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba7178ba-163b-58b0-89b4-3a73c9468ec2'}})  2026-04-17 03:44:09.337419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '34b96a2b-74e9-5d3b-a409-9327cdd3ba08'}})  2026-04-17 03:44:09.337426 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337433 | orchestrator | 2026-04-17 03:44:09.337439 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-17 03:44:09.337446 | orchestrator | Friday 17 April 2026 03:44:04 +0000 (0:00:00.152) 0:00:12.229 ********** 2026-04-17 03:44:09.337453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba7178ba-163b-58b0-89b4-3a73c9468ec2'}})  2026-04-17 03:44:09.337459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '34b96a2b-74e9-5d3b-a409-9327cdd3ba08'}})  2026-04-17 03:44:09.337466 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337473 | orchestrator | 2026-04-17 03:44:09.337479 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-17 03:44:09.337486 | orchestrator | Friday 17 April 2026 03:44:04 +0000 (0:00:00.156) 0:00:12.385 ********** 2026-04-17 03:44:09.337492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba7178ba-163b-58b0-89b4-3a73c9468ec2'}})  2026-04-17 03:44:09.337512 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '34b96a2b-74e9-5d3b-a409-9327cdd3ba08'}})  2026-04-17 03:44:09.337519 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337526 | orchestrator | 2026-04-17 03:44:09.337533 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-17 03:44:09.337541 | orchestrator | Friday 17 April 2026 03:44:05 +0000 (0:00:00.157) 0:00:12.543 ********** 2026-04-17 03:44:09.337547 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:44:09.337554 | orchestrator | 2026-04-17 03:44:09.337561 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-17 03:44:09.337568 | orchestrator | Friday 17 April 2026 03:44:05 +0000 (0:00:00.150) 0:00:12.693 ********** 2026-04-17 03:44:09.337574 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:44:09.337581 | orchestrator | 2026-04-17 03:44:09.337591 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-17 03:44:09.337598 | orchestrator | Friday 17 April 2026 03:44:05 +0000 (0:00:00.153) 0:00:12.847 ********** 2026-04-17 03:44:09.337605 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337612 | orchestrator | 2026-04-17 03:44:09.337619 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-17 03:44:09.337631 | orchestrator | Friday 17 April 2026 03:44:05 +0000 (0:00:00.156) 0:00:13.003 ********** 2026-04-17 03:44:09.337638 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337644 | orchestrator | 2026-04-17 03:44:09.337651 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-17 03:44:09.337658 | orchestrator | Friday 17 April 2026 03:44:05 +0000 (0:00:00.133) 0:00:13.137 ********** 2026-04-17 03:44:09.337664 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337671 | orchestrator | 2026-04-17 
03:44:09.337678 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-17 03:44:09.337684 | orchestrator | Friday 17 April 2026 03:44:05 +0000 (0:00:00.145) 0:00:13.282 ********** 2026-04-17 03:44:09.337691 | orchestrator | ok: [testbed-node-3] => { 2026-04-17 03:44:09.337698 | orchestrator |  "ceph_osd_devices": { 2026-04-17 03:44:09.337705 | orchestrator |  "sdb": { 2026-04-17 03:44:09.337725 | orchestrator |  "osd_lvm_uuid": "ba7178ba-163b-58b0-89b4-3a73c9468ec2" 2026-04-17 03:44:09.337741 | orchestrator |  }, 2026-04-17 03:44:09.337750 | orchestrator |  "sdc": { 2026-04-17 03:44:09.337760 | orchestrator |  "osd_lvm_uuid": "34b96a2b-74e9-5d3b-a409-9327cdd3ba08" 2026-04-17 03:44:09.337775 | orchestrator |  } 2026-04-17 03:44:09.337784 | orchestrator |  } 2026-04-17 03:44:09.337793 | orchestrator | } 2026-04-17 03:44:09.337802 | orchestrator | 2026-04-17 03:44:09.337811 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-17 03:44:09.337820 | orchestrator | Friday 17 April 2026 03:44:06 +0000 (0:00:00.371) 0:00:13.653 ********** 2026-04-17 03:44:09.337829 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337837 | orchestrator | 2026-04-17 03:44:09.337847 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-17 03:44:09.337855 | orchestrator | Friday 17 April 2026 03:44:06 +0000 (0:00:00.153) 0:00:13.807 ********** 2026-04-17 03:44:09.337864 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337873 | orchestrator | 2026-04-17 03:44:09.337882 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-17 03:44:09.337890 | orchestrator | Friday 17 April 2026 03:44:06 +0000 (0:00:00.136) 0:00:13.944 ********** 2026-04-17 03:44:09.337899 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:44:09.337908 | orchestrator | 2026-04-17 
03:44:09.337917 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-17 03:44:09.337928 | orchestrator | Friday 17 April 2026 03:44:06 +0000 (0:00:00.144) 0:00:14.088 ********** 2026-04-17 03:44:09.337937 | orchestrator | changed: [testbed-node-3] => { 2026-04-17 03:44:09.337946 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-17 03:44:09.337956 | orchestrator |  "ceph_osd_devices": { 2026-04-17 03:44:09.337966 | orchestrator |  "sdb": { 2026-04-17 03:44:09.337976 | orchestrator |  "osd_lvm_uuid": "ba7178ba-163b-58b0-89b4-3a73c9468ec2" 2026-04-17 03:44:09.338010 | orchestrator |  }, 2026-04-17 03:44:09.338068 | orchestrator |  "sdc": { 2026-04-17 03:44:09.338077 | orchestrator |  "osd_lvm_uuid": "34b96a2b-74e9-5d3b-a409-9327cdd3ba08" 2026-04-17 03:44:09.338088 | orchestrator |  } 2026-04-17 03:44:09.338094 | orchestrator |  }, 2026-04-17 03:44:09.338100 | orchestrator |  "lvm_volumes": [ 2026-04-17 03:44:09.338106 | orchestrator |  { 2026-04-17 03:44:09.338112 | orchestrator |  "data": "osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2", 2026-04-17 03:44:09.338118 | orchestrator |  "data_vg": "ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2" 2026-04-17 03:44:09.338124 | orchestrator |  }, 2026-04-17 03:44:09.338129 | orchestrator |  { 2026-04-17 03:44:09.338135 | orchestrator |  "data": "osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08", 2026-04-17 03:44:09.338141 | orchestrator |  "data_vg": "ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08" 2026-04-17 03:44:09.338146 | orchestrator |  } 2026-04-17 03:44:09.338160 | orchestrator |  ] 2026-04-17 03:44:09.338166 | orchestrator |  } 2026-04-17 03:44:09.338171 | orchestrator | } 2026-04-17 03:44:09.338177 | orchestrator | 2026-04-17 03:44:09.338183 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-17 03:44:09.338189 | orchestrator | Friday 17 April 2026 03:44:06 +0000 (0:00:00.244) 0:00:14.333 ********** 2026-04-17 
03:44:09.338194 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 03:44:09.338200 | orchestrator | 2026-04-17 03:44:09.338206 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-17 03:44:09.338211 | orchestrator | 2026-04-17 03:44:09.338217 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-17 03:44:09.338223 | orchestrator | Friday 17 April 2026 03:44:08 +0000 (0:00:01.956) 0:00:16.289 ********** 2026-04-17 03:44:09.338228 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-17 03:44:09.338234 | orchestrator | 2026-04-17 03:44:09.338240 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-17 03:44:09.338245 | orchestrator | Friday 17 April 2026 03:44:09 +0000 (0:00:00.254) 0:00:16.544 ********** 2026-04-17 03:44:09.338251 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:44:09.338257 | orchestrator | 2026-04-17 03:44:09.338270 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.386334 | orchestrator | Friday 17 April 2026 03:44:09 +0000 (0:00:00.236) 0:00:16.780 ********** 2026-04-17 03:44:18.386472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-17 03:44:18.386488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-17 03:44:18.386497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-17 03:44:18.386506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-17 03:44:18.386539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-17 03:44:18.386547 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-17 03:44:18.386555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-17 03:44:18.386563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-17 03:44:18.386571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-17 03:44:18.386583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-17 03:44:18.386596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-17 03:44:18.386610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-17 03:44:18.386622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-17 03:44:18.386634 | orchestrator | 2026-04-17 03:44:18.386647 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.386660 | orchestrator | Friday 17 April 2026 03:44:09 +0000 (0:00:00.568) 0:00:17.349 ********** 2026-04-17 03:44:18.386673 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.386686 | orchestrator | 2026-04-17 03:44:18.386699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.386712 | orchestrator | Friday 17 April 2026 03:44:10 +0000 (0:00:00.215) 0:00:17.564 ********** 2026-04-17 03:44:18.386725 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.386738 | orchestrator | 2026-04-17 03:44:18.386750 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.386763 | orchestrator | Friday 17 April 2026 03:44:10 +0000 (0:00:00.226) 0:00:17.790 ********** 2026-04-17 03:44:18.386777 | orchestrator | skipping: 
[testbed-node-4] 2026-04-17 03:44:18.386816 | orchestrator | 2026-04-17 03:44:18.386830 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.386844 | orchestrator | Friday 17 April 2026 03:44:10 +0000 (0:00:00.258) 0:00:18.049 ********** 2026-04-17 03:44:18.386857 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.386871 | orchestrator | 2026-04-17 03:44:18.386886 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.386899 | orchestrator | Friday 17 April 2026 03:44:10 +0000 (0:00:00.224) 0:00:18.273 ********** 2026-04-17 03:44:18.386907 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.386916 | orchestrator | 2026-04-17 03:44:18.386925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.386934 | orchestrator | Friday 17 April 2026 03:44:11 +0000 (0:00:00.203) 0:00:18.477 ********** 2026-04-17 03:44:18.386943 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.386951 | orchestrator | 2026-04-17 03:44:18.386960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.386969 | orchestrator | Friday 17 April 2026 03:44:11 +0000 (0:00:00.218) 0:00:18.695 ********** 2026-04-17 03:44:18.386977 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.386986 | orchestrator | 2026-04-17 03:44:18.387027 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.387039 | orchestrator | Friday 17 April 2026 03:44:11 +0000 (0:00:00.227) 0:00:18.923 ********** 2026-04-17 03:44:18.387061 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.387076 | orchestrator | 2026-04-17 03:44:18.387088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.387101 | 
orchestrator | Friday 17 April 2026 03:44:11 +0000 (0:00:00.201) 0:00:19.125 ********** 2026-04-17 03:44:18.387114 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6) 2026-04-17 03:44:18.387129 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6) 2026-04-17 03:44:18.387141 | orchestrator | 2026-04-17 03:44:18.387155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.387167 | orchestrator | Friday 17 April 2026 03:44:12 +0000 (0:00:00.685) 0:00:19.810 ********** 2026-04-17 03:44:18.387179 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4) 2026-04-17 03:44:18.387193 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4) 2026-04-17 03:44:18.387205 | orchestrator | 2026-04-17 03:44:18.387218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.387227 | orchestrator | Friday 17 April 2026 03:44:13 +0000 (0:00:00.677) 0:00:20.487 ********** 2026-04-17 03:44:18.387235 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96) 2026-04-17 03:44:18.387243 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96) 2026-04-17 03:44:18.387251 | orchestrator | 2026-04-17 03:44:18.387258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.387290 | orchestrator | Friday 17 April 2026 03:44:13 +0000 (0:00:00.933) 0:00:21.420 ********** 2026-04-17 03:44:18.387303 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784) 2026-04-17 03:44:18.387326 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784) 2026-04-17 03:44:18.387338 | orchestrator | 2026-04-17 03:44:18.387351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:18.387363 | orchestrator | Friday 17 April 2026 03:44:14 +0000 (0:00:00.456) 0:00:21.877 ********** 2026-04-17 03:44:18.387388 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-17 03:44:18.387401 | orchestrator | 2026-04-17 03:44:18.387414 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:18.387439 | orchestrator | Friday 17 April 2026 03:44:14 +0000 (0:00:00.346) 0:00:22.223 ********** 2026-04-17 03:44:18.387452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-17 03:44:18.387464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-17 03:44:18.387476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-17 03:44:18.387488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-17 03:44:18.387500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-17 03:44:18.387512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-17 03:44:18.387525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-17 03:44:18.387539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-17 03:44:18.387552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-17 03:44:18.387565 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-17 03:44:18.387578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-17 03:44:18.387591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-17 03:44:18.387604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-17 03:44:18.387618 | orchestrator | 2026-04-17 03:44:18.387632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:18.387645 | orchestrator | Friday 17 April 2026 03:44:15 +0000 (0:00:00.391) 0:00:22.614 ********** 2026-04-17 03:44:18.387658 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.387671 | orchestrator | 2026-04-17 03:44:18.387685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:18.387693 | orchestrator | Friday 17 April 2026 03:44:15 +0000 (0:00:00.210) 0:00:22.825 ********** 2026-04-17 03:44:18.387701 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.387709 | orchestrator | 2026-04-17 03:44:18.387716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:18.387724 | orchestrator | Friday 17 April 2026 03:44:15 +0000 (0:00:00.218) 0:00:23.043 ********** 2026-04-17 03:44:18.387732 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.387739 | orchestrator | 2026-04-17 03:44:18.387747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:18.387754 | orchestrator | Friday 17 April 2026 03:44:15 +0000 (0:00:00.195) 0:00:23.239 ********** 2026-04-17 03:44:18.387762 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.387770 | orchestrator | 2026-04-17 03:44:18.387778 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-04-17 03:44:18.387785 | orchestrator | Friday 17 April 2026 03:44:16 +0000 (0:00:00.222) 0:00:23.462 ********** 2026-04-17 03:44:18.387793 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.387801 | orchestrator | 2026-04-17 03:44:18.387808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:18.387816 | orchestrator | Friday 17 April 2026 03:44:16 +0000 (0:00:00.236) 0:00:23.698 ********** 2026-04-17 03:44:18.387823 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.387833 | orchestrator | 2026-04-17 03:44:18.387845 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:18.387864 | orchestrator | Friday 17 April 2026 03:44:16 +0000 (0:00:00.214) 0:00:23.913 ********** 2026-04-17 03:44:18.387880 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.387892 | orchestrator | 2026-04-17 03:44:18.387905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:18.387928 | orchestrator | Friday 17 April 2026 03:44:16 +0000 (0:00:00.207) 0:00:24.121 ********** 2026-04-17 03:44:18.387943 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:18.387956 | orchestrator | 2026-04-17 03:44:18.387969 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:18.387983 | orchestrator | Friday 17 April 2026 03:44:17 +0000 (0:00:00.774) 0:00:24.896 ********** 2026-04-17 03:44:18.388083 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-17 03:44:18.388093 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-17 03:44:18.388101 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-17 03:44:18.388109 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-17 03:44:18.388117 | orchestrator | 2026-04-17 
03:44:18.388125 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:18.388133 | orchestrator | Friday 17 April 2026 03:44:18 +0000 (0:00:00.712) 0:00:25.608 ********** 2026-04-17 03:44:18.388141 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.654532 | orchestrator | 2026-04-17 03:44:24.654609 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:24.654616 | orchestrator | Friday 17 April 2026 03:44:18 +0000 (0:00:00.223) 0:00:25.831 ********** 2026-04-17 03:44:24.654621 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.654625 | orchestrator | 2026-04-17 03:44:24.654629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:24.654634 | orchestrator | Friday 17 April 2026 03:44:18 +0000 (0:00:00.221) 0:00:26.053 ********** 2026-04-17 03:44:24.654638 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.654642 | orchestrator | 2026-04-17 03:44:24.654649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:24.654670 | orchestrator | Friday 17 April 2026 03:44:18 +0000 (0:00:00.233) 0:00:26.286 ********** 2026-04-17 03:44:24.654678 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.654684 | orchestrator | 2026-04-17 03:44:24.654690 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-17 03:44:24.654696 | orchestrator | Friday 17 April 2026 03:44:19 +0000 (0:00:00.221) 0:00:26.508 ********** 2026-04-17 03:44:24.654702 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-17 03:44:24.654708 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-17 03:44:24.654714 | orchestrator | 2026-04-17 03:44:24.654720 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-17 03:44:24.654727 | orchestrator | Friday 17 April 2026 03:44:19 +0000 (0:00:00.197) 0:00:26.705 ********** 2026-04-17 03:44:24.654733 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.654739 | orchestrator | 2026-04-17 03:44:24.654745 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-17 03:44:24.654752 | orchestrator | Friday 17 April 2026 03:44:19 +0000 (0:00:00.149) 0:00:26.855 ********** 2026-04-17 03:44:24.654758 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.654765 | orchestrator | 2026-04-17 03:44:24.654771 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-17 03:44:24.654777 | orchestrator | Friday 17 April 2026 03:44:19 +0000 (0:00:00.142) 0:00:26.997 ********** 2026-04-17 03:44:24.654783 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.654789 | orchestrator | 2026-04-17 03:44:24.654796 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-17 03:44:24.654802 | orchestrator | Friday 17 April 2026 03:44:19 +0000 (0:00:00.142) 0:00:27.140 ********** 2026-04-17 03:44:24.654808 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:44:24.654816 | orchestrator | 2026-04-17 03:44:24.654821 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-17 03:44:24.654825 | orchestrator | Friday 17 April 2026 03:44:19 +0000 (0:00:00.141) 0:00:27.281 ********** 2026-04-17 03:44:24.654829 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b2b01680-30d5-524c-a810-0db40fd977fd'}}) 2026-04-17 03:44:24.654850 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1504e56e-19fb-5fe8-bf47-cc017f2297d0'}}) 2026-04-17 03:44:24.654855 | orchestrator | 2026-04-17 03:44:24.654859 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-17 03:44:24.654862 | orchestrator | Friday 17 April 2026 03:44:20 +0000 (0:00:00.171) 0:00:27.453 ********** 2026-04-17 03:44:24.654867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b2b01680-30d5-524c-a810-0db40fd977fd'}})  2026-04-17 03:44:24.654872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1504e56e-19fb-5fe8-bf47-cc017f2297d0'}})  2026-04-17 03:44:24.654876 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.654880 | orchestrator | 2026-04-17 03:44:24.654884 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-17 03:44:24.654887 | orchestrator | Friday 17 April 2026 03:44:20 +0000 (0:00:00.387) 0:00:27.840 ********** 2026-04-17 03:44:24.654891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b2b01680-30d5-524c-a810-0db40fd977fd'}})  2026-04-17 03:44:24.654895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1504e56e-19fb-5fe8-bf47-cc017f2297d0'}})  2026-04-17 03:44:24.654899 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.654903 | orchestrator | 2026-04-17 03:44:24.654906 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-17 03:44:24.654910 | orchestrator | Friday 17 April 2026 03:44:20 +0000 (0:00:00.177) 0:00:28.018 ********** 2026-04-17 03:44:24.654914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b2b01680-30d5-524c-a810-0db40fd977fd'}})  2026-04-17 03:44:24.654918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1504e56e-19fb-5fe8-bf47-cc017f2297d0'}})  2026-04-17 03:44:24.654922 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.654926 | 
orchestrator | 2026-04-17 03:44:24.654929 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-17 03:44:24.654933 | orchestrator | Friday 17 April 2026 03:44:20 +0000 (0:00:00.164) 0:00:28.183 ********** 2026-04-17 03:44:24.654937 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:44:24.654940 | orchestrator | 2026-04-17 03:44:24.654944 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-17 03:44:24.654948 | orchestrator | Friday 17 April 2026 03:44:20 +0000 (0:00:00.150) 0:00:28.333 ********** 2026-04-17 03:44:24.654952 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:44:24.654955 | orchestrator | 2026-04-17 03:44:24.654959 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-17 03:44:24.654963 | orchestrator | Friday 17 April 2026 03:44:21 +0000 (0:00:00.146) 0:00:28.480 ********** 2026-04-17 03:44:24.654979 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.654986 | orchestrator | 2026-04-17 03:44:24.655010 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-17 03:44:24.655017 | orchestrator | Friday 17 April 2026 03:44:21 +0000 (0:00:00.159) 0:00:28.639 ********** 2026-04-17 03:44:24.655022 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.655028 | orchestrator | 2026-04-17 03:44:24.655034 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-17 03:44:24.655040 | orchestrator | Friday 17 April 2026 03:44:21 +0000 (0:00:00.135) 0:00:28.775 ********** 2026-04-17 03:44:24.655046 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:44:24.655052 | orchestrator | 2026-04-17 03:44:24.655058 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-17 03:44:24.655070 | orchestrator | Friday 17 April 2026 03:44:21 +0000 
(0:00:00.141) 0:00:28.916 **********
2026-04-17 03:44:24.655084 | orchestrator | ok: [testbed-node-4] => {
2026-04-17 03:44:24.655090 | orchestrator |     "ceph_osd_devices": {
2026-04-17 03:44:24.655106 | orchestrator |         "sdb": {
2026-04-17 03:44:24.655111 | orchestrator |             "osd_lvm_uuid": "b2b01680-30d5-524c-a810-0db40fd977fd"
2026-04-17 03:44:24.655116 | orchestrator |         },
2026-04-17 03:44:24.655121 | orchestrator |         "sdc": {
2026-04-17 03:44:24.655125 | orchestrator |             "osd_lvm_uuid": "1504e56e-19fb-5fe8-bf47-cc017f2297d0"
2026-04-17 03:44:24.655130 | orchestrator |         }
2026-04-17 03:44:24.655134 | orchestrator |     }
2026-04-17 03:44:24.655139 | orchestrator | }
2026-04-17 03:44:24.655144 | orchestrator |
2026-04-17 03:44:24.655148 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-17 03:44:24.655153 | orchestrator | Friday 17 April 2026 03:44:21 +0000 (0:00:00.151) 0:00:29.068 **********
2026-04-17 03:44:24.655157 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:44:24.655161 | orchestrator |
2026-04-17 03:44:24.655166 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-17 03:44:24.655170 | orchestrator | Friday 17 April 2026 03:44:21 +0000 (0:00:00.136) 0:00:29.205 **********
2026-04-17 03:44:24.655174 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:44:24.655179 | orchestrator |
2026-04-17 03:44:24.655183 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-17 03:44:24.655188 | orchestrator | Friday 17 April 2026 03:44:21 +0000 (0:00:00.137) 0:00:29.342 **********
2026-04-17 03:44:24.655192 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:44:24.655196 | orchestrator |
2026-04-17 03:44:24.655201 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-17 03:44:24.655205 | orchestrator | Friday 17 April 2026 03:44:22 +0000 (0:00:00.138) 0:00:29.480 **********
2026-04-17 03:44:24.655209 | orchestrator | changed: [testbed-node-4] => {
2026-04-17 03:44:24.655213 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-17 03:44:24.655219 | orchestrator |         "ceph_osd_devices": {
2026-04-17 03:44:24.655225 | orchestrator |             "sdb": {
2026-04-17 03:44:24.655231 | orchestrator |                 "osd_lvm_uuid": "b2b01680-30d5-524c-a810-0db40fd977fd"
2026-04-17 03:44:24.655241 | orchestrator |             },
2026-04-17 03:44:24.655249 | orchestrator |             "sdc": {
2026-04-17 03:44:24.655254 | orchestrator |                 "osd_lvm_uuid": "1504e56e-19fb-5fe8-bf47-cc017f2297d0"
2026-04-17 03:44:24.655260 | orchestrator |             }
2026-04-17 03:44:24.655266 | orchestrator |         },
2026-04-17 03:44:24.655272 | orchestrator |         "lvm_volumes": [
2026-04-17 03:44:24.655278 | orchestrator |             {
2026-04-17 03:44:24.655284 | orchestrator |                 "data": "osd-block-b2b01680-30d5-524c-a810-0db40fd977fd",
2026-04-17 03:44:24.655290 | orchestrator |                 "data_vg": "ceph-b2b01680-30d5-524c-a810-0db40fd977fd"
2026-04-17 03:44:24.655295 | orchestrator |             },
2026-04-17 03:44:24.655300 | orchestrator |             {
2026-04-17 03:44:24.655305 | orchestrator |                 "data": "osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0",
2026-04-17 03:44:24.655311 | orchestrator |                 "data_vg": "ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0"
2026-04-17 03:44:24.655317 | orchestrator |             }
2026-04-17 03:44:24.655324 | orchestrator |         ]
2026-04-17 03:44:24.655330 | orchestrator |     }
2026-04-17 03:44:24.655336 | orchestrator | }
2026-04-17 03:44:24.655343 | orchestrator |
2026-04-17 03:44:24.655349 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-17 03:44:24.655355 | orchestrator | Friday 17 April 2026 03:44:22 +0000 (0:00:00.442) 0:00:29.923 **********
2026-04-17 03:44:24.655362 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-17 03:44:24.655368 | orchestrator |
2026-04-17 03:44:24.655375 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-17 03:44:24.655381 | orchestrator |
2026-04-17 03:44:24.655387 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-17 03:44:24.655393 | orchestrator | Friday 17 April 2026 03:44:23 +0000 (0:00:01.213) 0:00:31.137 **********
2026-04-17 03:44:24.655399 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-17 03:44:24.655413 | orchestrator |
2026-04-17 03:44:24.655421 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-17 03:44:24.655429 | orchestrator | Friday 17 April 2026 03:44:23 +0000 (0:00:00.265) 0:00:31.402 **********
2026-04-17 03:44:24.655435 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:44:24.655441 | orchestrator |
2026-04-17 03:44:24.655446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:44:24.655453 | orchestrator | Friday 17 April 2026 03:44:24 +0000 (0:00:00.258) 0:00:31.661 **********
2026-04-17 03:44:24.655459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-17 03:44:24.655465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-17 03:44:24.655471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-17 03:44:24.655477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-17 03:44:24.655483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-17 03:44:24.655496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-17 03:44:34.132280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-17 03:44:34.132405
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-17 03:44:34.132429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-17 03:44:34.132445 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-17 03:44:34.132460 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-17 03:44:34.132495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-17 03:44:34.132513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-17 03:44:34.132529 | orchestrator | 2026-04-17 03:44:34.132545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.132562 | orchestrator | Friday 17 April 2026 03:44:24 +0000 (0:00:00.432) 0:00:32.094 ********** 2026-04-17 03:44:34.132579 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.132597 | orchestrator | 2026-04-17 03:44:34.132612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.132627 | orchestrator | Friday 17 April 2026 03:44:24 +0000 (0:00:00.228) 0:00:32.323 ********** 2026-04-17 03:44:34.132636 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.132645 | orchestrator | 2026-04-17 03:44:34.132654 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.132663 | orchestrator | Friday 17 April 2026 03:44:25 +0000 (0:00:00.219) 0:00:32.542 ********** 2026-04-17 03:44:34.132671 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.132680 | orchestrator | 2026-04-17 03:44:34.132689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.132697 | 
orchestrator | Friday 17 April 2026 03:44:25 +0000 (0:00:00.198) 0:00:32.741 ********** 2026-04-17 03:44:34.132706 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.132714 | orchestrator | 2026-04-17 03:44:34.132723 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.132732 | orchestrator | Friday 17 April 2026 03:44:25 +0000 (0:00:00.697) 0:00:33.439 ********** 2026-04-17 03:44:34.132741 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.132750 | orchestrator | 2026-04-17 03:44:34.132759 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.132767 | orchestrator | Friday 17 April 2026 03:44:26 +0000 (0:00:00.212) 0:00:33.651 ********** 2026-04-17 03:44:34.132776 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.132809 | orchestrator | 2026-04-17 03:44:34.132821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.132831 | orchestrator | Friday 17 April 2026 03:44:26 +0000 (0:00:00.215) 0:00:33.866 ********** 2026-04-17 03:44:34.132841 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.132850 | orchestrator | 2026-04-17 03:44:34.132861 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.132870 | orchestrator | Friday 17 April 2026 03:44:26 +0000 (0:00:00.241) 0:00:34.108 ********** 2026-04-17 03:44:34.132880 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.132890 | orchestrator | 2026-04-17 03:44:34.132901 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.132910 | orchestrator | Friday 17 April 2026 03:44:26 +0000 (0:00:00.216) 0:00:34.325 ********** 2026-04-17 03:44:34.132920 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e) 2026-04-17 03:44:34.132932 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e) 2026-04-17 03:44:34.132942 | orchestrator | 2026-04-17 03:44:34.132952 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.132960 | orchestrator | Friday 17 April 2026 03:44:27 +0000 (0:00:00.514) 0:00:34.839 ********** 2026-04-17 03:44:34.132969 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac) 2026-04-17 03:44:34.132977 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac) 2026-04-17 03:44:34.132986 | orchestrator | 2026-04-17 03:44:34.133025 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.133036 | orchestrator | Friday 17 April 2026 03:44:27 +0000 (0:00:00.539) 0:00:35.378 ********** 2026-04-17 03:44:34.133044 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b) 2026-04-17 03:44:34.133053 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b) 2026-04-17 03:44:34.133062 | orchestrator | 2026-04-17 03:44:34.133070 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:44:34.133079 | orchestrator | Friday 17 April 2026 03:44:28 +0000 (0:00:00.466) 0:00:35.845 ********** 2026-04-17 03:44:34.133088 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134) 2026-04-17 03:44:34.133097 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134) 2026-04-17 03:44:34.133106 | orchestrator | 2026-04-17 03:44:34.133115 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-17 03:44:34.133124 | orchestrator | Friday 17 April 2026 03:44:28 +0000 (0:00:00.465) 0:00:36.310 ********** 2026-04-17 03:44:34.133132 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-17 03:44:34.133141 | orchestrator | 2026-04-17 03:44:34.133150 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133176 | orchestrator | Friday 17 April 2026 03:44:29 +0000 (0:00:00.352) 0:00:36.663 ********** 2026-04-17 03:44:34.133185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-17 03:44:34.133193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-17 03:44:34.133202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-17 03:44:34.133211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-17 03:44:34.133225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-17 03:44:34.133234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-17 03:44:34.133243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-17 03:44:34.133259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-17 03:44:34.133268 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-17 03:44:34.133276 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-17 03:44:34.133285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-04-17 03:44:34.133293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-17 03:44:34.133302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-17 03:44:34.133311 | orchestrator | 2026-04-17 03:44:34.133319 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133328 | orchestrator | Friday 17 April 2026 03:44:29 +0000 (0:00:00.697) 0:00:37.360 ********** 2026-04-17 03:44:34.133337 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133345 | orchestrator | 2026-04-17 03:44:34.133354 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133363 | orchestrator | Friday 17 April 2026 03:44:30 +0000 (0:00:00.222) 0:00:37.583 ********** 2026-04-17 03:44:34.133371 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133380 | orchestrator | 2026-04-17 03:44:34.133389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133397 | orchestrator | Friday 17 April 2026 03:44:30 +0000 (0:00:00.230) 0:00:37.813 ********** 2026-04-17 03:44:34.133406 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133415 | orchestrator | 2026-04-17 03:44:34.133423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133432 | orchestrator | Friday 17 April 2026 03:44:30 +0000 (0:00:00.240) 0:00:38.054 ********** 2026-04-17 03:44:34.133441 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133449 | orchestrator | 2026-04-17 03:44:34.133462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133476 | orchestrator | Friday 17 April 2026 03:44:30 +0000 (0:00:00.219) 0:00:38.274 ********** 2026-04-17 03:44:34.133491 
| orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133506 | orchestrator | 2026-04-17 03:44:34.133521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133535 | orchestrator | Friday 17 April 2026 03:44:31 +0000 (0:00:00.226) 0:00:38.500 ********** 2026-04-17 03:44:34.133550 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133565 | orchestrator | 2026-04-17 03:44:34.133575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133583 | orchestrator | Friday 17 April 2026 03:44:31 +0000 (0:00:00.220) 0:00:38.720 ********** 2026-04-17 03:44:34.133592 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133601 | orchestrator | 2026-04-17 03:44:34.133609 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133618 | orchestrator | Friday 17 April 2026 03:44:31 +0000 (0:00:00.213) 0:00:38.934 ********** 2026-04-17 03:44:34.133626 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133635 | orchestrator | 2026-04-17 03:44:34.133644 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133652 | orchestrator | Friday 17 April 2026 03:44:31 +0000 (0:00:00.222) 0:00:39.156 ********** 2026-04-17 03:44:34.133661 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-17 03:44:34.133670 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-17 03:44:34.133679 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-17 03:44:34.133688 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-17 03:44:34.133696 | orchestrator | 2026-04-17 03:44:34.133705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133714 | orchestrator | Friday 17 April 2026 03:44:32 +0000 (0:00:00.887) 0:00:40.044 
********** 2026-04-17 03:44:34.133732 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133740 | orchestrator | 2026-04-17 03:44:34.133749 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133758 | orchestrator | Friday 17 April 2026 03:44:32 +0000 (0:00:00.215) 0:00:40.259 ********** 2026-04-17 03:44:34.133766 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133775 | orchestrator | 2026-04-17 03:44:34.133783 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133792 | orchestrator | Friday 17 April 2026 03:44:33 +0000 (0:00:00.267) 0:00:40.527 ********** 2026-04-17 03:44:34.133801 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133809 | orchestrator | 2026-04-17 03:44:34.133818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:44:34.133827 | orchestrator | Friday 17 April 2026 03:44:33 +0000 (0:00:00.816) 0:00:41.343 ********** 2026-04-17 03:44:34.133835 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:34.133844 | orchestrator | 2026-04-17 03:44:34.133858 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-17 03:44:38.555259 | orchestrator | Friday 17 April 2026 03:44:34 +0000 (0:00:00.229) 0:00:41.573 ********** 2026-04-17 03:44:38.555374 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-17 03:44:38.555390 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-17 03:44:38.555403 | orchestrator | 2026-04-17 03:44:38.555415 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-17 03:44:38.555426 | orchestrator | Friday 17 April 2026 03:44:34 +0000 (0:00:00.192) 0:00:41.766 ********** 2026-04-17 03:44:38.555438 | orchestrator | skipping: 
[testbed-node-5] 2026-04-17 03:44:38.555450 | orchestrator | 2026-04-17 03:44:38.555483 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-17 03:44:38.555503 | orchestrator | Friday 17 April 2026 03:44:34 +0000 (0:00:00.174) 0:00:41.941 ********** 2026-04-17 03:44:38.555521 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:38.555539 | orchestrator | 2026-04-17 03:44:38.555584 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-17 03:44:38.555603 | orchestrator | Friday 17 April 2026 03:44:34 +0000 (0:00:00.145) 0:00:42.086 ********** 2026-04-17 03:44:38.555632 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:38.555649 | orchestrator | 2026-04-17 03:44:38.555663 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-17 03:44:38.555678 | orchestrator | Friday 17 April 2026 03:44:34 +0000 (0:00:00.147) 0:00:42.234 ********** 2026-04-17 03:44:38.555695 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:44:38.555711 | orchestrator | 2026-04-17 03:44:38.555726 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-17 03:44:38.555741 | orchestrator | Friday 17 April 2026 03:44:34 +0000 (0:00:00.150) 0:00:42.385 ********** 2026-04-17 03:44:38.555758 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '690571ed-11b8-555e-b420-011f2882a19f'}}) 2026-04-17 03:44:38.555774 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '58d5b32d-9713-5f24-a4e2-aea701c9df8d'}}) 2026-04-17 03:44:38.555789 | orchestrator | 2026-04-17 03:44:38.555806 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-17 03:44:38.555821 | orchestrator | Friday 17 April 2026 03:44:35 +0000 (0:00:00.174) 0:00:42.559 ********** 2026-04-17 03:44:38.555836 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '690571ed-11b8-555e-b420-011f2882a19f'}})  2026-04-17 03:44:38.555853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '58d5b32d-9713-5f24-a4e2-aea701c9df8d'}})  2026-04-17 03:44:38.555869 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:38.555884 | orchestrator | 2026-04-17 03:44:38.555900 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-17 03:44:38.555950 | orchestrator | Friday 17 April 2026 03:44:35 +0000 (0:00:00.178) 0:00:42.738 ********** 2026-04-17 03:44:38.555966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '690571ed-11b8-555e-b420-011f2882a19f'}})  2026-04-17 03:44:38.555981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '58d5b32d-9713-5f24-a4e2-aea701c9df8d'}})  2026-04-17 03:44:38.556089 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:38.556112 | orchestrator | 2026-04-17 03:44:38.556130 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-17 03:44:38.556146 | orchestrator | Friday 17 April 2026 03:44:35 +0000 (0:00:00.167) 0:00:42.905 ********** 2026-04-17 03:44:38.556162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '690571ed-11b8-555e-b420-011f2882a19f'}})  2026-04-17 03:44:38.556177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '58d5b32d-9713-5f24-a4e2-aea701c9df8d'}})  2026-04-17 03:44:38.556193 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:44:38.556209 | orchestrator | 2026-04-17 03:44:38.556224 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-17 03:44:38.556239 | orchestrator | Friday 17 April 2026 03:44:35 +0000 
(0:00:00.167) 0:00:43.072 **********
2026-04-17 03:44:38.556255 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:44:38.556287 | orchestrator |
2026-04-17 03:44:38.556303 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-17 03:44:38.556320 | orchestrator | Friday 17 April 2026 03:44:35 +0000 (0:00:00.162) 0:00:43.234 **********
2026-04-17 03:44:38.556335 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:44:38.556350 | orchestrator |
2026-04-17 03:44:38.556366 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-17 03:44:38.556382 | orchestrator | Friday 17 April 2026 03:44:36 +0000 (0:00:00.366) 0:00:43.601 **********
2026-04-17 03:44:38.556398 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:44:38.556414 | orchestrator |
2026-04-17 03:44:38.556430 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-17 03:44:38.556446 | orchestrator | Friday 17 April 2026 03:44:36 +0000 (0:00:00.145) 0:00:43.746 **********
2026-04-17 03:44:38.556462 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:44:38.556477 | orchestrator |
2026-04-17 03:44:38.556494 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-17 03:44:38.556510 | orchestrator | Friday 17 April 2026 03:44:36 +0000 (0:00:00.155) 0:00:43.901 **********
2026-04-17 03:44:38.556526 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:44:38.556542 | orchestrator |
2026-04-17 03:44:38.556552 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-17 03:44:38.556562 | orchestrator | Friday 17 April 2026 03:44:36 +0000 (0:00:00.154) 0:00:44.056 **********
2026-04-17 03:44:38.556571 | orchestrator | ok: [testbed-node-5] => {
2026-04-17 03:44:38.556581 | orchestrator |     "ceph_osd_devices": {
2026-04-17 03:44:38.556591 | orchestrator |         "sdb": {
2026-04-17 03:44:38.556628 | orchestrator |             "osd_lvm_uuid": "690571ed-11b8-555e-b420-011f2882a19f"
2026-04-17 03:44:38.556638 | orchestrator |         },
2026-04-17 03:44:38.556648 | orchestrator |         "sdc": {
2026-04-17 03:44:38.556658 | orchestrator |             "osd_lvm_uuid": "58d5b32d-9713-5f24-a4e2-aea701c9df8d"
2026-04-17 03:44:38.556668 | orchestrator |         }
2026-04-17 03:44:38.556680 | orchestrator |     }
2026-04-17 03:44:38.556697 | orchestrator | }
2026-04-17 03:44:38.556714 | orchestrator |
2026-04-17 03:44:38.556743 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-17 03:44:38.556761 | orchestrator | Friday 17 April 2026 03:44:36 +0000 (0:00:00.159) 0:00:44.215 **********
2026-04-17 03:44:38.556794 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:44:38.556810 | orchestrator |
2026-04-17 03:44:38.556824 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-17 03:44:38.556848 | orchestrator | Friday 17 April 2026 03:44:36 +0000 (0:00:00.149) 0:00:44.365 **********
2026-04-17 03:44:38.556858 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:44:38.556867 | orchestrator |
2026-04-17 03:44:38.556877 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-17 03:44:38.556886 | orchestrator | Friday 17 April 2026 03:44:37 +0000 (0:00:00.154) 0:00:44.520 **********
2026-04-17 03:44:38.556896 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:44:38.556905 | orchestrator |
2026-04-17 03:44:38.556915 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-17 03:44:38.556924 | orchestrator | Friday 17 April 2026 03:44:37 +0000 (0:00:00.139) 0:00:44.659 **********
2026-04-17 03:44:38.556934 | orchestrator | changed: [testbed-node-5] => {
2026-04-17 03:44:38.556944 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-17 03:44:38.556954 | orchestrator |         "ceph_osd_devices": {
2026-04-17 03:44:38.556964 | orchestrator |             "sdb": {
2026-04-17 03:44:38.556974 | orchestrator |                 "osd_lvm_uuid": "690571ed-11b8-555e-b420-011f2882a19f"
2026-04-17 03:44:38.556983 | orchestrator |             },
2026-04-17 03:44:38.556993 | orchestrator |             "sdc": {
2026-04-17 03:44:38.557035 | orchestrator |                 "osd_lvm_uuid": "58d5b32d-9713-5f24-a4e2-aea701c9df8d"
2026-04-17 03:44:38.557045 | orchestrator |             }
2026-04-17 03:44:38.557055 | orchestrator |         },
2026-04-17 03:44:38.557065 | orchestrator |         "lvm_volumes": [
2026-04-17 03:44:38.557075 | orchestrator |             {
2026-04-17 03:44:38.557085 | orchestrator |                 "data": "osd-block-690571ed-11b8-555e-b420-011f2882a19f",
2026-04-17 03:44:38.557095 | orchestrator |                 "data_vg": "ceph-690571ed-11b8-555e-b420-011f2882a19f"
2026-04-17 03:44:38.557105 | orchestrator |             },
2026-04-17 03:44:38.557114 | orchestrator |             {
2026-04-17 03:44:38.557124 | orchestrator |                 "data": "osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d",
2026-04-17 03:44:38.557134 | orchestrator |                 "data_vg": "ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d"
2026-04-17 03:44:38.557144 | orchestrator |             }
2026-04-17 03:44:38.557153 | orchestrator |         ]
2026-04-17 03:44:38.557163 | orchestrator |     }
2026-04-17 03:44:38.557173 | orchestrator | }
2026-04-17 03:44:38.557183 | orchestrator |
2026-04-17 03:44:38.557193 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-17 03:44:38.557202 | orchestrator | Friday 17 April 2026 03:44:37 +0000 (0:00:00.258) 0:00:44.918 **********
2026-04-17 03:44:38.557212 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-17 03:44:38.557221 | orchestrator |
2026-04-17 03:44:38.557231 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:44:38.557241 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-17 03:44:38.557252 |
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-17 03:44:38.557262 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-17 03:44:38.557272 | orchestrator | 2026-04-17 03:44:38.557282 | orchestrator | 2026-04-17 03:44:38.557291 | orchestrator | 2026-04-17 03:44:38.557301 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:44:38.557311 | orchestrator | Friday 17 April 2026 03:44:38 +0000 (0:00:01.066) 0:00:45.984 ********** 2026-04-17 03:44:38.557320 | orchestrator | =============================================================================== 2026-04-17 03:44:38.557330 | orchestrator | Write configuration file ------------------------------------------------ 4.24s 2026-04-17 03:44:38.557340 | orchestrator | Add known partitions to the list of available block devices ------------- 1.54s 2026-04-17 03:44:38.557349 | orchestrator | Add known links to the list of available block devices ------------------ 1.52s 2026-04-17 03:44:38.557366 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s 2026-04-17 03:44:38.557375 | orchestrator | Print configuration data ------------------------------------------------ 0.95s 2026-04-17 03:44:38.557385 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2026-04-17 03:44:38.557395 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-04-17 03:44:38.557404 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2026-04-17 03:44:38.557414 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2026-04-17 03:44:38.557423 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.80s 2026-04-17 
03:44:38.557433 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-04-17 03:44:38.557443 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2026-04-17 03:44:38.557453 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.72s 2026-04-17 03:44:38.557477 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-04-17 03:44:38.947481 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-04-17 03:44:38.947589 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-04-17 03:44:38.947605 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-04-17 03:44:38.947618 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-04-17 03:44:38.947653 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.68s 2026-04-17 03:44:38.947666 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-04-17 03:45:01.486865 | orchestrator | 2026-04-17 03:45:01 | INFO  | Task c5afae69-ee53-4582-9300-bd6096b10e65 (sync inventory) is running in background. Output coming soon. 
2026-04-17 03:45:29.162263 | orchestrator | 2026-04-17 03:45:02 | INFO  | Starting group_vars file reorganization
2026-04-17 03:45:29.162390 | orchestrator | 2026-04-17 03:45:02 | INFO  | Moved 0 file(s) to their respective directories
2026-04-17 03:45:29.162405 | orchestrator | 2026-04-17 03:45:02 | INFO  | Group_vars file reorganization completed
2026-04-17 03:45:29.162413 | orchestrator | 2026-04-17 03:45:05 | INFO  | Starting variable preparation from inventory
2026-04-17 03:45:29.162420 | orchestrator | 2026-04-17 03:45:08 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-17 03:45:29.162427 | orchestrator | 2026-04-17 03:45:08 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-17 03:45:29.162434 | orchestrator | 2026-04-17 03:45:08 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-17 03:45:29.162441 | orchestrator | 2026-04-17 03:45:08 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-17 03:45:29.162447 | orchestrator | 2026-04-17 03:45:08 | INFO  | Variable preparation completed
2026-04-17 03:45:29.162454 | orchestrator | 2026-04-17 03:45:09 | INFO  | Starting inventory overwrite handling
2026-04-17 03:45:29.162460 | orchestrator | 2026-04-17 03:45:09 | INFO  | Handling group overwrites in 99-overwrite
2026-04-17 03:45:29.162466 | orchestrator | 2026-04-17 03:45:09 | INFO  | Removing group frr:children from 60-generic
2026-04-17 03:45:29.162472 | orchestrator | 2026-04-17 03:45:09 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-17 03:45:29.162479 | orchestrator | 2026-04-17 03:45:09 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-17 03:45:29.162485 | orchestrator | 2026-04-17 03:45:09 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-17 03:45:29.162513 | orchestrator | 2026-04-17 03:45:09 | INFO  | Handling group overwrites in 20-roles
2026-04-17 03:45:29.162520 | orchestrator | 2026-04-17 03:45:09 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-17 03:45:29.162526 | orchestrator | 2026-04-17 03:45:09 | INFO  | Removed 5 group(s) in total
2026-04-17 03:45:29.162532 | orchestrator | 2026-04-17 03:45:09 | INFO  | Inventory overwrite handling completed
2026-04-17 03:45:29.162539 | orchestrator | 2026-04-17 03:45:10 | INFO  | Starting merge of inventory files
2026-04-17 03:45:29.162545 | orchestrator | 2026-04-17 03:45:10 | INFO  | Inventory files merged successfully
2026-04-17 03:45:29.162551 | orchestrator | 2026-04-17 03:45:15 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-17 03:45:29.162557 | orchestrator | 2026-04-17 03:45:27 | INFO  | Successfully wrote ClusterShell configuration
2026-04-17 03:45:29.162563 | orchestrator | [master 639c05b] 2026-04-17-03-45
2026-04-17 03:45:29.162572 | orchestrator |  1 file changed, 30 insertions(+), 9 deletions(-)
2026-04-17 03:45:31.485649 | orchestrator | 2026-04-17 03:45:31 | INFO  | Task 864a6987-f06a-4e1a-9fef-26be9921f8d2 (ceph-create-lvm-devices) was prepared for execution.
2026-04-17 03:45:31.485772 | orchestrator | 2026-04-17 03:45:31 | INFO  | It takes a moment until task 864a6987-f06a-4e1a-9fef-26be9921f8d2 (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-17 03:45:44.326371 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-17 03:45:44.326494 | orchestrator | 2.16.14
2026-04-17 03:45:44.326509 | orchestrator |
2026-04-17 03:45:44.326519 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-17 03:45:44.326529 | orchestrator |
2026-04-17 03:45:44.326597 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-17 03:45:44.326609 | orchestrator | Friday 17 April 2026 03:45:35 +0000 (0:00:00.311) 0:00:00.311 **********
2026-04-17 03:45:44.326618 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-17 03:45:44.326627 | orchestrator |
2026-04-17 03:45:44.326635 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-17 03:45:44.326644 | orchestrator | Friday 17 April 2026 03:45:36 +0000 (0:00:00.270) 0:00:00.582 **********
2026-04-17 03:45:44.326652 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:45:44.326661 | orchestrator |
2026-04-17 03:45:44.326669 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.326677 | orchestrator | Friday 17 April 2026 03:45:36 +0000 (0:00:00.248) 0:00:00.831 **********
2026-04-17 03:45:44.326685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-17 03:45:44.326693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-17 03:45:44.326702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-17 03:45:44.326724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-17 03:45:44.326732 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-17 03:45:44.326740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-17 03:45:44.326748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-17 03:45:44.326756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-17 03:45:44.326764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-17 03:45:44.326772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-17 03:45:44.326780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-17 03:45:44.326808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-17 03:45:44.326816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-17 03:45:44.326824 | orchestrator |
2026-04-17 03:45:44.326832 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.326840 | orchestrator | Friday 17 April 2026 03:45:36 +0000 (0:00:00.541) 0:00:01.372 **********
2026-04-17 03:45:44.326848 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.326856 | orchestrator |
2026-04-17 03:45:44.326864 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.326872 | orchestrator | Friday 17 April 2026 03:45:37 +0000 (0:00:00.224) 0:00:01.596 **********
2026-04-17 03:45:44.326880 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.326889 | orchestrator |
2026-04-17 03:45:44.326897 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.326905 | orchestrator | Friday 17 April 2026 03:45:37 +0000 (0:00:00.214) 0:00:01.811 **********
2026-04-17 03:45:44.326913 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.326920 | orchestrator |
2026-04-17 03:45:44.326928 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.326936 | orchestrator | Friday 17 April 2026 03:45:37 +0000 (0:00:00.215) 0:00:02.026 **********
2026-04-17 03:45:44.326944 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.326952 | orchestrator |
2026-04-17 03:45:44.326960 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.326968 | orchestrator | Friday 17 April 2026 03:45:37 +0000 (0:00:00.215) 0:00:02.242 **********
2026-04-17 03:45:44.326976 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.326984 | orchestrator |
2026-04-17 03:45:44.326992 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.327000 | orchestrator | Friday 17 April 2026 03:45:37 +0000 (0:00:00.217) 0:00:02.460 **********
2026-04-17 03:45:44.327008 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.327016 | orchestrator |
2026-04-17 03:45:44.327024 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.327032 | orchestrator | Friday 17 April 2026 03:45:38 +0000 (0:00:00.221) 0:00:02.682 **********
2026-04-17 03:45:44.327040 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.327048 | orchestrator |
2026-04-17 03:45:44.327088 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.327102 | orchestrator | Friday 17 April 2026 03:45:38 +0000 (0:00:00.212) 0:00:02.895 **********
2026-04-17 03:45:44.327115 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.327129 | orchestrator |
2026-04-17 03:45:44.327139 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.327147 | orchestrator | Friday 17 April 2026 03:45:38 +0000 (0:00:00.214) 0:00:03.109 **********
2026-04-17 03:45:44.327155 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d)
2026-04-17 03:45:44.327165 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d)
2026-04-17 03:45:44.327173 | orchestrator |
2026-04-17 03:45:44.327180 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.327205 | orchestrator | Friday 17 April 2026 03:45:39 +0000 (0:00:00.690) 0:00:03.800 **********
2026-04-17 03:45:44.327214 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c)
2026-04-17 03:45:44.327222 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c)
2026-04-17 03:45:44.327230 | orchestrator |
2026-04-17 03:45:44.327237 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.327245 | orchestrator | Friday 17 April 2026 03:45:39 +0000 (0:00:00.680) 0:00:04.480 **********
2026-04-17 03:45:44.327253 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098)
2026-04-17 03:45:44.327269 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098)
2026-04-17 03:45:44.327277 | orchestrator |
2026-04-17 03:45:44.327285 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.327300 | orchestrator | Friday 17 April 2026 03:45:40 +0000 (0:00:00.961) 0:00:05.442 **********
2026-04-17 03:45:44.327313 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b)
2026-04-17 03:45:44.327326 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b)
2026-04-17 03:45:44.327339 | orchestrator |
2026-04-17 03:45:44.327353 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 03:45:44.327374 | orchestrator | Friday 17 April 2026 03:45:41 +0000 (0:00:00.490) 0:00:05.932 **********
2026-04-17 03:45:44.327384 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-17 03:45:44.327392 | orchestrator |
2026-04-17 03:45:44.327400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:44.327407 | orchestrator | Friday 17 April 2026 03:45:41 +0000 (0:00:00.363) 0:00:06.296 **********
2026-04-17 03:45:44.327415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-17 03:45:44.327423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-17 03:45:44.327431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-17 03:45:44.327439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-17 03:45:44.327446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-17 03:45:44.327454 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-17 03:45:44.327462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-17 03:45:44.327469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-17 03:45:44.327477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-17 03:45:44.327485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-17 03:45:44.327492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-17 03:45:44.327500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-17 03:45:44.327508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-17 03:45:44.327515 | orchestrator |
2026-04-17 03:45:44.327523 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:44.327531 | orchestrator | Friday 17 April 2026 03:45:42 +0000 (0:00:00.486) 0:00:06.782 **********
2026-04-17 03:45:44.327539 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.327547 | orchestrator |
2026-04-17 03:45:44.327554 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:44.327562 | orchestrator | Friday 17 April 2026 03:45:42 +0000 (0:00:00.214) 0:00:06.997 **********
2026-04-17 03:45:44.327570 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.327578 | orchestrator |
2026-04-17 03:45:44.327586 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:44.327594 | orchestrator | Friday 17 April 2026 03:45:42 +0000 (0:00:00.231) 0:00:07.229 **********
2026-04-17 03:45:44.327602 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.327609 | orchestrator |
2026-04-17 03:45:44.327617 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:44.327631 | orchestrator | Friday 17 April 2026 03:45:42 +0000 (0:00:00.227) 0:00:07.456 **********
2026-04-17 03:45:44.327639 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.327647 | orchestrator |
2026-04-17 03:45:44.327655 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:44.327663 | orchestrator | Friday 17 April 2026 03:45:43 +0000 (0:00:00.219) 0:00:07.675 **********
2026-04-17 03:45:44.327670 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.327678 | orchestrator |
2026-04-17 03:45:44.327686 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:44.327694 | orchestrator | Friday 17 April 2026 03:45:43 +0000 (0:00:00.214) 0:00:07.890 **********
2026-04-17 03:45:44.327701 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.327709 | orchestrator |
2026-04-17 03:45:44.327717 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:44.327725 | orchestrator | Friday 17 April 2026 03:45:44 +0000 (0:00:00.684) 0:00:08.574 **********
2026-04-17 03:45:44.327733 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:44.327740 | orchestrator |
2026-04-17 03:45:44.327754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:52.574660 | orchestrator | Friday 17 April 2026 03:45:44 +0000 (0:00:00.227) 0:00:08.801 **********
2026-04-17 03:45:52.574738 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.574746 | orchestrator |
2026-04-17 03:45:52.574752 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:52.574757 | orchestrator | Friday 17 April 2026 03:45:44 +0000 (0:00:00.231) 0:00:09.033 **********
2026-04-17 03:45:52.574762 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-17 03:45:52.574768 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-17 03:45:52.574773 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-17 03:45:52.574777 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-17 03:45:52.574781 | orchestrator |
2026-04-17 03:45:52.574786 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:52.574790 | orchestrator | Friday 17 April 2026 03:45:45 +0000 (0:00:00.691) 0:00:09.724 **********
2026-04-17 03:45:52.574794 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.574798 | orchestrator |
2026-04-17 03:45:52.574802 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:52.574806 | orchestrator | Friday 17 April 2026 03:45:45 +0000 (0:00:00.229) 0:00:09.953 **********
2026-04-17 03:45:52.574810 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.574814 | orchestrator |
2026-04-17 03:45:52.574818 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:52.574822 | orchestrator | Friday 17 April 2026 03:45:45 +0000 (0:00:00.224) 0:00:10.178 **********
2026-04-17 03:45:52.574839 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.574843 | orchestrator |
2026-04-17 03:45:52.574847 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 03:45:52.574851 | orchestrator | Friday 17 April 2026 03:45:45 +0000 (0:00:00.235) 0:00:10.414 **********
2026-04-17 03:45:52.574855 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.574859 | orchestrator |
2026-04-17 03:45:52.574864 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-17 03:45:52.574868 | orchestrator | Friday 17 April 2026 03:45:46 +0000 (0:00:00.213) 0:00:10.627 **********
2026-04-17 03:45:52.574872 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.574876 | orchestrator |
2026-04-17 03:45:52.574880 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-17 03:45:52.574884 | orchestrator | Friday 17 April 2026 03:45:46 +0000 (0:00:00.153) 0:00:10.781 **********
2026-04-17 03:45:52.574889 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba7178ba-163b-58b0-89b4-3a73c9468ec2'}})
2026-04-17 03:45:52.574894 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '34b96a2b-74e9-5d3b-a409-9327cdd3ba08'}})
2026-04-17 03:45:52.574912 | orchestrator |
2026-04-17 03:45:52.574917 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-17 03:45:52.574921 | orchestrator | Friday 17 April 2026 03:45:46 +0000 (0:00:00.195) 0:00:10.976 **********
2026-04-17 03:45:52.574926 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 03:45:52.574932 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 03:45:52.574936 | orchestrator |
2026-04-17 03:45:52.574940 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-17 03:45:52.574944 | orchestrator | Friday 17 April 2026 03:45:48 +0000 (0:00:01.977) 0:00:12.954 **********
2026-04-17 03:45:52.574948 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 03:45:52.574953 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 03:45:52.574957 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.574962 | orchestrator |
2026-04-17 03:45:52.574966 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-17 03:45:52.574970 | orchestrator | Friday 17 April 2026 03:45:48 +0000 (0:00:00.393) 0:00:13.348 **********
2026-04-17 03:45:52.574974 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 03:45:52.574978 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 03:45:52.574982 | orchestrator |
2026-04-17 03:45:52.574986 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-17 03:45:52.574990 | orchestrator | Friday 17 April 2026 03:45:50 +0000 (0:00:01.522) 0:00:14.871 **********
2026-04-17 03:45:52.574994 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 03:45:52.574998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 03:45:52.575002 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.575007 | orchestrator |
2026-04-17 03:45:52.575011 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-17 03:45:52.575015 | orchestrator | Friday 17 April 2026 03:45:50 +0000 (0:00:00.169) 0:00:15.040 **********
2026-04-17 03:45:52.575029 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.575034 | orchestrator |
2026-04-17 03:45:52.575038 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-17 03:45:52.575042 | orchestrator | Friday 17 April 2026 03:45:50 +0000 (0:00:00.142) 0:00:15.183 **********
2026-04-17 03:45:52.575046 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 03:45:52.575050 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 03:45:52.575054 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.575058 | orchestrator |
2026-04-17 03:45:52.575097 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-17 03:45:52.575102 | orchestrator | Friday 17 April 2026 03:45:50 +0000 (0:00:00.144) 0:00:15.328 **********
2026-04-17 03:45:52.575106 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.575110 | orchestrator |
2026-04-17 03:45:52.575114 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-17 03:45:52.575122 | orchestrator | Friday 17 April 2026 03:45:50 +0000 (0:00:00.150) 0:00:15.478 **********
2026-04-17 03:45:52.575131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 03:45:52.575135 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 03:45:52.575139 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.575143 | orchestrator |
2026-04-17 03:45:52.575147 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-17 03:45:52.575151 | orchestrator | Friday 17 April 2026 03:45:51 +0000 (0:00:00.162) 0:00:15.641 **********
2026-04-17 03:45:52.575155 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.575159 | orchestrator |
2026-04-17 03:45:52.575163 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-17 03:45:52.575167 | orchestrator | Friday 17 April 2026 03:45:51 +0000 (0:00:00.147) 0:00:15.788 **********
2026-04-17 03:45:52.575171 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 03:45:52.575175 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 03:45:52.575179 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.575183 | orchestrator |
2026-04-17 03:45:52.575187 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-17 03:45:52.575191 | orchestrator | Friday 17 April 2026 03:45:51 +0000 (0:00:00.157) 0:00:15.945 **********
2026-04-17 03:45:52.575196 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:45:52.575200 | orchestrator |
2026-04-17 03:45:52.575204 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-17 03:45:52.575208 | orchestrator | Friday 17 April 2026 03:45:51 +0000 (0:00:00.197) 0:00:16.142 **********
2026-04-17 03:45:52.575212 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 03:45:52.575216 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 03:45:52.575221 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.575228 | orchestrator |
2026-04-17 03:45:52.575235 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-17 03:45:52.575241 | orchestrator | Friday 17 April 2026 03:45:51 +0000 (0:00:00.198) 0:00:16.341 **********
2026-04-17 03:45:52.575248 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 03:45:52.575255 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 03:45:52.575262 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.575269 | orchestrator |
2026-04-17 03:45:52.575276 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-17 03:45:52.575283 | orchestrator | Friday 17 April 2026 03:45:52 +0000 (0:00:00.386) 0:00:16.728 **********
2026-04-17 03:45:52.575290 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 03:45:52.575298 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 03:45:52.575303 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.575311 | orchestrator |
2026-04-17 03:45:52.575316 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-17 03:45:52.575321 | orchestrator | Friday 17 April 2026 03:45:52 +0000 (0:00:00.173) 0:00:16.902 **********
2026-04-17 03:45:52.575326 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:52.575330 | orchestrator |
2026-04-17 03:45:52.575335 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-17 03:45:52.575343 | orchestrator | Friday 17 April 2026 03:45:52 +0000 (0:00:00.153) 0:00:17.055 **********
2026-04-17 03:45:59.654144 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:59.654243 | orchestrator |
2026-04-17 03:45:59.654254 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-17 03:45:59.654261 | orchestrator | Friday 17 April 2026 03:45:52 +0000 (0:00:00.151) 0:00:17.207 **********
2026-04-17 03:45:59.654266 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:59.654271 | orchestrator |
2026-04-17 03:45:59.654277 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-17 03:45:59.654282 | orchestrator | Friday 17 April 2026 03:45:52 +0000 (0:00:00.147) 0:00:17.354 **********
2026-04-17 03:45:59.654287 | orchestrator | ok: [testbed-node-3] => {
2026-04-17 03:45:59.654292 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-17 03:45:59.654297 | orchestrator | }
2026-04-17 03:45:59.654302 | orchestrator |
2026-04-17 03:45:59.654307 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-17 03:45:59.654311 | orchestrator | Friday 17 April 2026 03:45:53 +0000 (0:00:00.184) 0:00:17.539 **********
2026-04-17 03:45:59.654316 | orchestrator | ok: [testbed-node-3] => {
2026-04-17 03:45:59.654321 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-17 03:45:59.654325 | orchestrator | }
2026-04-17 03:45:59.654331 | orchestrator |
2026-04-17 03:45:59.654338 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-17 03:45:59.654344 | orchestrator | Friday 17 April 2026 03:45:53 +0000 (0:00:00.148) 0:00:17.687 **********
2026-04-17 03:45:59.654352 | orchestrator | ok: [testbed-node-3] => {
2026-04-17 03:45:59.654418 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-17 03:45:59.654424 | orchestrator | }
2026-04-17 03:45:59.654429 | orchestrator |
2026-04-17 03:45:59.654434 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-17 03:45:59.654438 | orchestrator | Friday 17 April 2026 03:45:53 +0000 (0:00:00.151) 0:00:17.839 **********
2026-04-17 03:45:59.654443 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:45:59.654448 | orchestrator |
2026-04-17 03:45:59.654453 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-17 03:45:59.654457 | orchestrator | Friday 17 April 2026 03:45:54 +0000 (0:00:00.693) 0:00:18.532 **********
2026-04-17 03:45:59.654462 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:45:59.654466 | orchestrator |
2026-04-17 03:45:59.654471 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-17 03:45:59.654476 | orchestrator | Friday 17 April 2026 03:45:54 +0000 (0:00:00.542) 0:00:19.075 **********
2026-04-17 03:45:59.654480 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:45:59.654485 | orchestrator |
2026-04-17 03:45:59.654490 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-17 03:45:59.654494 | orchestrator | Friday 17 April 2026 03:45:55 +0000 (0:00:00.401) 0:00:19.633 **********
2026-04-17 03:45:59.654499 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:45:59.654503 | orchestrator |
2026-04-17 03:45:59.654508 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-17 03:45:59.654513 | orchestrator | Friday 17 April 2026 03:45:55 +0000 (0:00:00.112) 0:00:20.034 **********
2026-04-17 03:45:59.654517 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:59.654523 | orchestrator |
2026-04-17 03:45:59.654530 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-17 03:45:59.654537 | orchestrator | Friday 17 April 2026 03:45:55 +0000 (0:00:00.112) 0:00:20.147 **********
2026-04-17 03:45:59.654544 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:45:59.654571 | orchestrator |
2026-04-17 03:45:59.654579 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-17 03:45:59.654586 | orchestrator |
Friday 17 April 2026 03:45:55 +0000 (0:00:00.138) 0:00:20.285 ********** 2026-04-17 03:45:59.654593 | orchestrator | ok: [testbed-node-3] => { 2026-04-17 03:45:59.654600 | orchestrator |  "vgs_report": { 2026-04-17 03:45:59.654607 | orchestrator |  "vg": [] 2026-04-17 03:45:59.654615 | orchestrator |  } 2026-04-17 03:45:59.654623 | orchestrator | } 2026-04-17 03:45:59.654631 | orchestrator | 2026-04-17 03:45:59.654639 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-17 03:45:59.654647 | orchestrator | Friday 17 April 2026 03:45:55 +0000 (0:00:00.148) 0:00:20.434 ********** 2026-04-17 03:45:59.654655 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.654662 | orchestrator | 2026-04-17 03:45:59.654669 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-17 03:45:59.654677 | orchestrator | Friday 17 April 2026 03:45:56 +0000 (0:00:00.152) 0:00:20.586 ********** 2026-04-17 03:45:59.654684 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.654691 | orchestrator | 2026-04-17 03:45:59.654699 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-17 03:45:59.654706 | orchestrator | Friday 17 April 2026 03:45:56 +0000 (0:00:00.147) 0:00:20.734 ********** 2026-04-17 03:45:59.654714 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.654720 | orchestrator | 2026-04-17 03:45:59.654727 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-17 03:45:59.654734 | orchestrator | Friday 17 April 2026 03:45:56 +0000 (0:00:00.151) 0:00:20.885 ********** 2026-04-17 03:45:59.654741 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.654749 | orchestrator | 2026-04-17 03:45:59.654755 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-17 03:45:59.654762 | orchestrator | Friday 
17 April 2026 03:45:56 +0000 (0:00:00.151) 0:00:21.036 ********** 2026-04-17 03:45:59.654770 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.654778 | orchestrator | 2026-04-17 03:45:59.654785 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-17 03:45:59.654792 | orchestrator | Friday 17 April 2026 03:45:56 +0000 (0:00:00.132) 0:00:21.169 ********** 2026-04-17 03:45:59.654798 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.654805 | orchestrator | 2026-04-17 03:45:59.654812 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-17 03:45:59.654820 | orchestrator | Friday 17 April 2026 03:45:56 +0000 (0:00:00.142) 0:00:21.312 ********** 2026-04-17 03:45:59.654828 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.654836 | orchestrator | 2026-04-17 03:45:59.654844 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-17 03:45:59.654852 | orchestrator | Friday 17 April 2026 03:45:56 +0000 (0:00:00.148) 0:00:21.461 ********** 2026-04-17 03:45:59.654876 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.654884 | orchestrator | 2026-04-17 03:45:59.654892 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-17 03:45:59.654899 | orchestrator | Friday 17 April 2026 03:45:57 +0000 (0:00:00.454) 0:00:21.915 ********** 2026-04-17 03:45:59.654907 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.654915 | orchestrator | 2026-04-17 03:45:59.654922 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-17 03:45:59.654930 | orchestrator | Friday 17 April 2026 03:45:57 +0000 (0:00:00.143) 0:00:22.059 ********** 2026-04-17 03:45:59.654939 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.654945 | orchestrator | 2026-04-17 03:45:59.654953 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-17 03:45:59.654961 | orchestrator | Friday 17 April 2026 03:45:57 +0000 (0:00:00.153) 0:00:22.213 ********** 2026-04-17 03:45:59.654967 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.654974 | orchestrator | 2026-04-17 03:45:59.654981 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-17 03:45:59.654999 | orchestrator | Friday 17 April 2026 03:45:57 +0000 (0:00:00.166) 0:00:22.379 ********** 2026-04-17 03:45:59.655007 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.655014 | orchestrator | 2026-04-17 03:45:59.655022 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-17 03:45:59.655030 | orchestrator | Friday 17 April 2026 03:45:58 +0000 (0:00:00.167) 0:00:22.547 ********** 2026-04-17 03:45:59.655044 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.655052 | orchestrator | 2026-04-17 03:45:59.655058 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-17 03:45:59.655062 | orchestrator | Friday 17 April 2026 03:45:58 +0000 (0:00:00.167) 0:00:22.714 ********** 2026-04-17 03:45:59.655084 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.655090 | orchestrator | 2026-04-17 03:45:59.655094 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-17 03:45:59.655099 | orchestrator | Friday 17 April 2026 03:45:58 +0000 (0:00:00.149) 0:00:22.864 ********** 2026-04-17 03:45:59.655105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 03:45:59.655111 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 
'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 03:45:59.655116 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.655121 | orchestrator | 2026-04-17 03:45:59.655128 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-17 03:45:59.655135 | orchestrator | Friday 17 April 2026 03:45:58 +0000 (0:00:00.174) 0:00:23.038 ********** 2026-04-17 03:45:59.655142 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 03:45:59.655149 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 03:45:59.655157 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.655165 | orchestrator | 2026-04-17 03:45:59.655171 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-17 03:45:59.655175 | orchestrator | Friday 17 April 2026 03:45:58 +0000 (0:00:00.161) 0:00:23.200 ********** 2026-04-17 03:45:59.655180 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 03:45:59.655184 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 03:45:59.655189 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.655193 | orchestrator | 2026-04-17 03:45:59.655198 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-17 03:45:59.655203 | orchestrator | Friday 17 April 2026 03:45:58 +0000 (0:00:00.184) 0:00:23.384 ********** 2026-04-17 03:45:59.655207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 03:45:59.655212 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 03:45:59.655216 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.655221 | orchestrator | 2026-04-17 03:45:59.655225 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-17 03:45:59.655230 | orchestrator | Friday 17 April 2026 03:45:59 +0000 (0:00:00.170) 0:00:23.555 ********** 2026-04-17 03:45:59.655234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 03:45:59.655244 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 03:45:59.655249 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:45:59.655253 | orchestrator | 2026-04-17 03:45:59.655258 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-17 03:45:59.655262 | orchestrator | Friday 17 April 2026 03:45:59 +0000 (0:00:00.406) 0:00:23.961 ********** 2026-04-17 03:45:59.655273 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 03:46:05.202938 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 03:46:05.203096 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:46:05.203117 | orchestrator | 2026-04-17 03:46:05.203130 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-04-17 03:46:05.203143 | orchestrator | Friday 17 April 2026 03:45:59 +0000 (0:00:00.172) 0:00:24.134 ********** 2026-04-17 03:46:05.203155 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 03:46:05.203166 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 03:46:05.203189 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:46:05.203200 | orchestrator | 2026-04-17 03:46:05.203211 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-17 03:46:05.203222 | orchestrator | Friday 17 April 2026 03:45:59 +0000 (0:00:00.181) 0:00:24.315 ********** 2026-04-17 03:46:05.203250 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 03:46:05.203262 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 03:46:05.203273 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:46:05.203284 | orchestrator | 2026-04-17 03:46:05.203295 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-17 03:46:05.203306 | orchestrator | Friday 17 April 2026 03:45:59 +0000 (0:00:00.158) 0:00:24.474 ********** 2026-04-17 03:46:05.203317 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:46:05.203329 | orchestrator | 2026-04-17 03:46:05.203340 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-17 03:46:05.203351 | orchestrator | Friday 17 April 2026 03:46:00 +0000 
(0:00:00.509) 0:00:24.984 ********** 2026-04-17 03:46:05.203361 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:46:05.203372 | orchestrator | 2026-04-17 03:46:05.203383 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-17 03:46:05.203394 | orchestrator | Friday 17 April 2026 03:46:01 +0000 (0:00:00.530) 0:00:25.515 ********** 2026-04-17 03:46:05.203405 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:46:05.203416 | orchestrator | 2026-04-17 03:46:05.203427 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-17 03:46:05.203437 | orchestrator | Friday 17 April 2026 03:46:01 +0000 (0:00:00.156) 0:00:25.671 ********** 2026-04-17 03:46:05.203449 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'vg_name': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'}) 2026-04-17 03:46:05.203464 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'vg_name': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'}) 2026-04-17 03:46:05.203478 | orchestrator | 2026-04-17 03:46:05.203491 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-17 03:46:05.203526 | orchestrator | Friday 17 April 2026 03:46:01 +0000 (0:00:00.171) 0:00:25.843 ********** 2026-04-17 03:46:05.203541 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 03:46:05.203553 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 03:46:05.203566 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:46:05.203579 | orchestrator | 2026-04-17 03:46:05.203592 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-04-17 03:46:05.203605 | orchestrator | Friday 17 April 2026 03:46:01 +0000 (0:00:00.162) 0:00:26.006 ********** 2026-04-17 03:46:05.203618 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 03:46:05.203631 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 03:46:05.203644 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:46:05.203657 | orchestrator | 2026-04-17 03:46:05.203669 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-17 03:46:05.203682 | orchestrator | Friday 17 April 2026 03:46:01 +0000 (0:00:00.167) 0:00:26.174 ********** 2026-04-17 03:46:05.203695 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 03:46:05.203722 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 03:46:05.203745 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:46:05.203758 | orchestrator | 2026-04-17 03:46:05.203770 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-17 03:46:05.203783 | orchestrator | Friday 17 April 2026 03:46:01 +0000 (0:00:00.170) 0:00:26.345 ********** 2026-04-17 03:46:05.203814 | orchestrator | ok: [testbed-node-3] => { 2026-04-17 03:46:05.203827 | orchestrator |  "lvm_report": { 2026-04-17 03:46:05.203839 | orchestrator |  "lv": [ 2026-04-17 03:46:05.203850 | orchestrator |  { 2026-04-17 03:46:05.203861 | orchestrator |  "lv_name": 
"osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08", 2026-04-17 03:46:05.203873 | orchestrator |  "vg_name": "ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08" 2026-04-17 03:46:05.203884 | orchestrator |  }, 2026-04-17 03:46:05.203895 | orchestrator |  { 2026-04-17 03:46:05.203906 | orchestrator |  "lv_name": "osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2", 2026-04-17 03:46:05.203917 | orchestrator |  "vg_name": "ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2" 2026-04-17 03:46:05.203928 | orchestrator |  } 2026-04-17 03:46:05.203939 | orchestrator |  ], 2026-04-17 03:46:05.203950 | orchestrator |  "pv": [ 2026-04-17 03:46:05.203961 | orchestrator |  { 2026-04-17 03:46:05.203971 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-17 03:46:05.203983 | orchestrator |  "vg_name": "ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2" 2026-04-17 03:46:05.203993 | orchestrator |  }, 2026-04-17 03:46:05.204004 | orchestrator |  { 2026-04-17 03:46:05.204015 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-17 03:46:05.204032 | orchestrator |  "vg_name": "ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08" 2026-04-17 03:46:05.204044 | orchestrator |  } 2026-04-17 03:46:05.204054 | orchestrator |  ] 2026-04-17 03:46:05.204065 | orchestrator |  } 2026-04-17 03:46:05.204098 | orchestrator | } 2026-04-17 03:46:05.204110 | orchestrator | 2026-04-17 03:46:05.204121 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-17 03:46:05.204140 | orchestrator | 2026-04-17 03:46:05.204151 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-17 03:46:05.204162 | orchestrator | Friday 17 April 2026 03:46:02 +0000 (0:00:00.577) 0:00:26.923 ********** 2026-04-17 03:46:05.204174 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-17 03:46:05.204185 | orchestrator | 2026-04-17 03:46:05.204197 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-17 
03:46:05.204208 | orchestrator | Friday 17 April 2026 03:46:02 +0000 (0:00:00.281) 0:00:27.205 ********** 2026-04-17 03:46:05.204220 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:46:05.204239 | orchestrator | 2026-04-17 03:46:05.204257 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:05.204274 | orchestrator | Friday 17 April 2026 03:46:02 +0000 (0:00:00.245) 0:00:27.450 ********** 2026-04-17 03:46:05.204297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-17 03:46:05.204322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-17 03:46:05.204340 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-17 03:46:05.204357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-17 03:46:05.204375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-17 03:46:05.204391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-17 03:46:05.204409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-17 03:46:05.204428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-17 03:46:05.204445 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-17 03:46:05.204463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-17 03:46:05.204479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-17 03:46:05.204496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-17 03:46:05.204513 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-17 03:46:05.204531 | orchestrator | 2026-04-17 03:46:05.204549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:05.204568 | orchestrator | Friday 17 April 2026 03:46:03 +0000 (0:00:00.440) 0:00:27.890 ********** 2026-04-17 03:46:05.204580 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:05.204590 | orchestrator | 2026-04-17 03:46:05.204601 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:05.204612 | orchestrator | Friday 17 April 2026 03:46:03 +0000 (0:00:00.203) 0:00:28.094 ********** 2026-04-17 03:46:05.204622 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:05.204633 | orchestrator | 2026-04-17 03:46:05.204644 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:05.204654 | orchestrator | Friday 17 April 2026 03:46:03 +0000 (0:00:00.207) 0:00:28.302 ********** 2026-04-17 03:46:05.204665 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:05.204676 | orchestrator | 2026-04-17 03:46:05.204687 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:05.204697 | orchestrator | Friday 17 April 2026 03:46:04 +0000 (0:00:00.218) 0:00:28.520 ********** 2026-04-17 03:46:05.204708 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:05.204719 | orchestrator | 2026-04-17 03:46:05.204729 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:05.204740 | orchestrator | Friday 17 April 2026 03:46:04 +0000 (0:00:00.221) 0:00:28.741 ********** 2026-04-17 03:46:05.204751 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:05.204761 | orchestrator | 2026-04-17 03:46:05.204783 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-04-17 03:46:05.204794 | orchestrator | Friday 17 April 2026 03:46:04 +0000 (0:00:00.199) 0:00:28.941 ********** 2026-04-17 03:46:05.204804 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:05.204815 | orchestrator | 2026-04-17 03:46:05.204837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:15.933169 | orchestrator | Friday 17 April 2026 03:46:05 +0000 (0:00:00.739) 0:00:29.681 ********** 2026-04-17 03:46:15.933253 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933261 | orchestrator | 2026-04-17 03:46:15.933267 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:15.933272 | orchestrator | Friday 17 April 2026 03:46:05 +0000 (0:00:00.226) 0:00:29.908 ********** 2026-04-17 03:46:15.933277 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933282 | orchestrator | 2026-04-17 03:46:15.933287 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:15.933292 | orchestrator | Friday 17 April 2026 03:46:05 +0000 (0:00:00.212) 0:00:30.121 ********** 2026-04-17 03:46:15.933297 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6) 2026-04-17 03:46:15.933304 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6) 2026-04-17 03:46:15.933309 | orchestrator | 2026-04-17 03:46:15.933313 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:15.933318 | orchestrator | Friday 17 April 2026 03:46:06 +0000 (0:00:00.466) 0:00:30.587 ********** 2026-04-17 03:46:15.933335 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4) 2026-04-17 03:46:15.933340 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4) 2026-04-17 03:46:15.933345 | orchestrator | 2026-04-17 03:46:15.933350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:15.933354 | orchestrator | Friday 17 April 2026 03:46:06 +0000 (0:00:00.466) 0:00:31.054 ********** 2026-04-17 03:46:15.933359 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96) 2026-04-17 03:46:15.933363 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96) 2026-04-17 03:46:15.933368 | orchestrator | 2026-04-17 03:46:15.933373 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:15.933377 | orchestrator | Friday 17 April 2026 03:46:07 +0000 (0:00:00.448) 0:00:31.503 ********** 2026-04-17 03:46:15.933382 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784) 2026-04-17 03:46:15.933387 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784) 2026-04-17 03:46:15.933391 | orchestrator | 2026-04-17 03:46:15.933396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:15.933400 | orchestrator | Friday 17 April 2026 03:46:07 +0000 (0:00:00.488) 0:00:31.992 ********** 2026-04-17 03:46:15.933405 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-17 03:46:15.933409 | orchestrator | 2026-04-17 03:46:15.933414 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933419 | orchestrator | Friday 17 April 2026 03:46:07 +0000 (0:00:00.351) 0:00:32.343 ********** 2026-04-17 03:46:15.933423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-04-17 03:46:15.933429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-17 03:46:15.933433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-17 03:46:15.933438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-17 03:46:15.933458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-17 03:46:15.933462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-17 03:46:15.933467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-17 03:46:15.933471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-17 03:46:15.933476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-17 03:46:15.933481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-17 03:46:15.933485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-17 03:46:15.933490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-17 03:46:15.933494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-17 03:46:15.933499 | orchestrator | 2026-04-17 03:46:15.933503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933508 | orchestrator | Friday 17 April 2026 03:46:08 +0000 (0:00:00.412) 0:00:32.755 ********** 2026-04-17 03:46:15.933512 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933517 | orchestrator | 2026-04-17 
03:46:15.933521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933526 | orchestrator | Friday 17 April 2026 03:46:08 +0000 (0:00:00.203) 0:00:32.959 ********** 2026-04-17 03:46:15.933530 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933535 | orchestrator | 2026-04-17 03:46:15.933542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933549 | orchestrator | Friday 17 April 2026 03:46:08 +0000 (0:00:00.215) 0:00:33.175 ********** 2026-04-17 03:46:15.933556 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933563 | orchestrator | 2026-04-17 03:46:15.933584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933593 | orchestrator | Friday 17 April 2026 03:46:09 +0000 (0:00:00.757) 0:00:33.933 ********** 2026-04-17 03:46:15.933600 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933608 | orchestrator | 2026-04-17 03:46:15.933616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933623 | orchestrator | Friday 17 April 2026 03:46:09 +0000 (0:00:00.224) 0:00:34.157 ********** 2026-04-17 03:46:15.933631 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933639 | orchestrator | 2026-04-17 03:46:15.933647 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933654 | orchestrator | Friday 17 April 2026 03:46:09 +0000 (0:00:00.213) 0:00:34.371 ********** 2026-04-17 03:46:15.933662 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933669 | orchestrator | 2026-04-17 03:46:15.933673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933678 | orchestrator | Friday 17 April 2026 03:46:10 +0000 (0:00:00.212) 
0:00:34.583 ********** 2026-04-17 03:46:15.933683 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933688 | orchestrator | 2026-04-17 03:46:15.933693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933703 | orchestrator | Friday 17 April 2026 03:46:10 +0000 (0:00:00.220) 0:00:34.804 ********** 2026-04-17 03:46:15.933708 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933713 | orchestrator | 2026-04-17 03:46:15.933718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933723 | orchestrator | Friday 17 April 2026 03:46:10 +0000 (0:00:00.218) 0:00:35.023 ********** 2026-04-17 03:46:15.933728 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-17 03:46:15.933734 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-17 03:46:15.933739 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-17 03:46:15.933750 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-17 03:46:15.933755 | orchestrator | 2026-04-17 03:46:15.933760 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933765 | orchestrator | Friday 17 April 2026 03:46:11 +0000 (0:00:00.665) 0:00:35.688 ********** 2026-04-17 03:46:15.933770 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933776 | orchestrator | 2026-04-17 03:46:15.933781 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933786 | orchestrator | Friday 17 April 2026 03:46:11 +0000 (0:00:00.206) 0:00:35.894 ********** 2026-04-17 03:46:15.933792 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933799 | orchestrator | 2026-04-17 03:46:15.933807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933814 | orchestrator | Friday 17 
April 2026 03:46:11 +0000 (0:00:00.206) 0:00:36.100 ********** 2026-04-17 03:46:15.933822 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933829 | orchestrator | 2026-04-17 03:46:15.933836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:15.933844 | orchestrator | Friday 17 April 2026 03:46:11 +0000 (0:00:00.207) 0:00:36.308 ********** 2026-04-17 03:46:15.933851 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933858 | orchestrator | 2026-04-17 03:46:15.933867 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-17 03:46:15.933873 | orchestrator | Friday 17 April 2026 03:46:12 +0000 (0:00:00.200) 0:00:36.508 ********** 2026-04-17 03:46:15.933880 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.933887 | orchestrator | 2026-04-17 03:46:15.933894 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-17 03:46:15.933900 | orchestrator | Friday 17 April 2026 03:46:12 +0000 (0:00:00.393) 0:00:36.902 ********** 2026-04-17 03:46:15.933908 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b2b01680-30d5-524c-a810-0db40fd977fd'}}) 2026-04-17 03:46:15.933917 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1504e56e-19fb-5fe8-bf47-cc017f2297d0'}}) 2026-04-17 03:46:15.933923 | orchestrator | 2026-04-17 03:46:15.933930 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-17 03:46:15.933937 | orchestrator | Friday 17 April 2026 03:46:12 +0000 (0:00:00.197) 0:00:37.100 ********** 2026-04-17 03:46:15.933945 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'}) 2026-04-17 03:46:15.933953 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'}) 2026-04-17 03:46:15.933960 | orchestrator | 2026-04-17 03:46:15.933968 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-17 03:46:15.933976 | orchestrator | Friday 17 April 2026 03:46:14 +0000 (0:00:01.797) 0:00:38.897 ********** 2026-04-17 03:46:15.933984 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:15.933993 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:15.934002 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:15.934009 | orchestrator | 2026-04-17 03:46:15.934072 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-17 03:46:15.934097 | orchestrator | Friday 17 April 2026 03:46:14 +0000 (0:00:00.157) 0:00:39.055 ********** 2026-04-17 03:46:15.934105 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'}) 2026-04-17 03:46:15.934120 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'}) 2026-04-17 03:46:21.903869 | orchestrator | 2026-04-17 03:46:21.903960 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-17 03:46:21.903970 | orchestrator | Friday 17 April 2026 03:46:15 +0000 (0:00:01.355) 0:00:40.410 ********** 2026-04-17 03:46:21.903977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 
'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:21.903986 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:21.903992 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904000 | orchestrator | 2026-04-17 03:46:21.904006 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-17 03:46:21.904012 | orchestrator | Friday 17 April 2026 03:46:16 +0000 (0:00:00.158) 0:00:40.568 ********** 2026-04-17 03:46:21.904031 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904038 | orchestrator | 2026-04-17 03:46:21.904044 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-17 03:46:21.904050 | orchestrator | Friday 17 April 2026 03:46:16 +0000 (0:00:00.143) 0:00:40.712 ********** 2026-04-17 03:46:21.904057 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:21.904063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:21.904070 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904076 | orchestrator | 2026-04-17 03:46:21.904082 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-17 03:46:21.904114 | orchestrator | Friday 17 April 2026 03:46:16 +0000 (0:00:00.174) 0:00:40.887 ********** 2026-04-17 03:46:21.904120 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904127 | orchestrator | 2026-04-17 03:46:21.904133 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-17 03:46:21.904139 | orchestrator | Friday 
17 April 2026 03:46:16 +0000 (0:00:00.146) 0:00:41.034 ********** 2026-04-17 03:46:21.904146 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:21.904152 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:21.904158 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904164 | orchestrator | 2026-04-17 03:46:21.904171 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-17 03:46:21.904177 | orchestrator | Friday 17 April 2026 03:46:16 +0000 (0:00:00.170) 0:00:41.204 ********** 2026-04-17 03:46:21.904184 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904190 | orchestrator | 2026-04-17 03:46:21.904196 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-17 03:46:21.904202 | orchestrator | Friday 17 April 2026 03:46:16 +0000 (0:00:00.163) 0:00:41.367 ********** 2026-04-17 03:46:21.904208 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:21.904215 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:21.904221 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904227 | orchestrator | 2026-04-17 03:46:21.904233 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-17 03:46:21.904240 | orchestrator | Friday 17 April 2026 03:46:17 +0000 (0:00:00.160) 0:00:41.528 ********** 2026-04-17 03:46:21.904264 | orchestrator | ok: [testbed-node-4] 
2026-04-17 03:46:21.904272 | orchestrator | 2026-04-17 03:46:21.904278 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-17 03:46:21.904284 | orchestrator | Friday 17 April 2026 03:46:17 +0000 (0:00:00.176) 0:00:41.705 ********** 2026-04-17 03:46:21.904290 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:21.904296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:21.904303 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904309 | orchestrator | 2026-04-17 03:46:21.904315 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-17 03:46:21.904321 | orchestrator | Friday 17 April 2026 03:46:17 +0000 (0:00:00.461) 0:00:42.166 ********** 2026-04-17 03:46:21.904327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:21.904334 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:21.904340 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904346 | orchestrator | 2026-04-17 03:46:21.904352 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-17 03:46:21.904370 | orchestrator | Friday 17 April 2026 03:46:17 +0000 (0:00:00.194) 0:00:42.361 ********** 2026-04-17 03:46:21.904377 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 
03:46:21.904383 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:21.904389 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904395 | orchestrator | 2026-04-17 03:46:21.904401 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-17 03:46:21.904407 | orchestrator | Friday 17 April 2026 03:46:18 +0000 (0:00:00.161) 0:00:42.523 ********** 2026-04-17 03:46:21.904415 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904422 | orchestrator | 2026-04-17 03:46:21.904429 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-17 03:46:21.904440 | orchestrator | Friday 17 April 2026 03:46:18 +0000 (0:00:00.146) 0:00:42.670 ********** 2026-04-17 03:46:21.904447 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904454 | orchestrator | 2026-04-17 03:46:21.904461 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-17 03:46:21.904467 | orchestrator | Friday 17 April 2026 03:46:18 +0000 (0:00:00.143) 0:00:42.814 ********** 2026-04-17 03:46:21.904474 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904481 | orchestrator | 2026-04-17 03:46:21.904488 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-17 03:46:21.904495 | orchestrator | Friday 17 April 2026 03:46:18 +0000 (0:00:00.158) 0:00:42.972 ********** 2026-04-17 03:46:21.904502 | orchestrator | ok: [testbed-node-4] => { 2026-04-17 03:46:21.904510 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-17 03:46:21.904517 | orchestrator | } 2026-04-17 03:46:21.904524 | orchestrator | 2026-04-17 03:46:21.904531 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-17 
03:46:21.904538 | orchestrator | Friday 17 April 2026 03:46:18 +0000 (0:00:00.139) 0:00:43.111 ********** 2026-04-17 03:46:21.904545 | orchestrator | ok: [testbed-node-4] => { 2026-04-17 03:46:21.904552 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-17 03:46:21.904558 | orchestrator | } 2026-04-17 03:46:21.904565 | orchestrator | 2026-04-17 03:46:21.904572 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-17 03:46:21.904585 | orchestrator | Friday 17 April 2026 03:46:18 +0000 (0:00:00.149) 0:00:43.260 ********** 2026-04-17 03:46:21.904591 | orchestrator | ok: [testbed-node-4] => { 2026-04-17 03:46:21.904599 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-17 03:46:21.904606 | orchestrator | } 2026-04-17 03:46:21.904613 | orchestrator | 2026-04-17 03:46:21.904620 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-17 03:46:21.904639 | orchestrator | Friday 17 April 2026 03:46:18 +0000 (0:00:00.145) 0:00:43.406 ********** 2026-04-17 03:46:21.904646 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:46:21.904661 | orchestrator | 2026-04-17 03:46:21.904668 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-17 03:46:21.904675 | orchestrator | Friday 17 April 2026 03:46:19 +0000 (0:00:00.525) 0:00:43.932 ********** 2026-04-17 03:46:21.904682 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:46:21.904688 | orchestrator | 2026-04-17 03:46:21.904695 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-17 03:46:21.904702 | orchestrator | Friday 17 April 2026 03:46:19 +0000 (0:00:00.513) 0:00:44.445 ********** 2026-04-17 03:46:21.904709 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:46:21.904716 | orchestrator | 2026-04-17 03:46:21.904723 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-04-17 03:46:21.904729 | orchestrator | Friday 17 April 2026 03:46:20 +0000 (0:00:00.552) 0:00:44.998 ********** 2026-04-17 03:46:21.904736 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:46:21.904743 | orchestrator | 2026-04-17 03:46:21.904750 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-17 03:46:21.904757 | orchestrator | Friday 17 April 2026 03:46:20 +0000 (0:00:00.366) 0:00:45.364 ********** 2026-04-17 03:46:21.904764 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904771 | orchestrator | 2026-04-17 03:46:21.904778 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-17 03:46:21.904785 | orchestrator | Friday 17 April 2026 03:46:21 +0000 (0:00:00.133) 0:00:45.497 ********** 2026-04-17 03:46:21.904792 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904798 | orchestrator | 2026-04-17 03:46:21.904804 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-17 03:46:21.904810 | orchestrator | Friday 17 April 2026 03:46:21 +0000 (0:00:00.121) 0:00:45.619 ********** 2026-04-17 03:46:21.904817 | orchestrator | ok: [testbed-node-4] => { 2026-04-17 03:46:21.904823 | orchestrator |  "vgs_report": { 2026-04-17 03:46:21.904830 | orchestrator |  "vg": [] 2026-04-17 03:46:21.904836 | orchestrator |  } 2026-04-17 03:46:21.904842 | orchestrator | } 2026-04-17 03:46:21.904848 | orchestrator | 2026-04-17 03:46:21.904854 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-17 03:46:21.904861 | orchestrator | Friday 17 April 2026 03:46:21 +0000 (0:00:00.156) 0:00:45.776 ********** 2026-04-17 03:46:21.904868 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904878 | orchestrator | 2026-04-17 03:46:21.904889 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-04-17 03:46:21.904900 | orchestrator | Friday 17 April 2026 03:46:21 +0000 (0:00:00.140) 0:00:45.916 ********** 2026-04-17 03:46:21.904913 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904927 | orchestrator | 2026-04-17 03:46:21.904938 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-17 03:46:21.904948 | orchestrator | Friday 17 April 2026 03:46:21 +0000 (0:00:00.165) 0:00:46.082 ********** 2026-04-17 03:46:21.904957 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.904966 | orchestrator | 2026-04-17 03:46:21.904976 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-17 03:46:21.904986 | orchestrator | Friday 17 April 2026 03:46:21 +0000 (0:00:00.143) 0:00:46.226 ********** 2026-04-17 03:46:21.904995 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:21.905004 | orchestrator | 2026-04-17 03:46:21.905019 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-17 03:46:26.827062 | orchestrator | Friday 17 April 2026 03:46:21 +0000 (0:00:00.156) 0:00:46.383 ********** 2026-04-17 03:46:26.827199 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827215 | orchestrator | 2026-04-17 03:46:26.827225 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-17 03:46:26.827235 | orchestrator | Friday 17 April 2026 03:46:22 +0000 (0:00:00.147) 0:00:46.531 ********** 2026-04-17 03:46:26.827244 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827253 | orchestrator | 2026-04-17 03:46:26.827261 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-17 03:46:26.827270 | orchestrator | Friday 17 April 2026 03:46:22 +0000 (0:00:00.148) 0:00:46.679 ********** 2026-04-17 03:46:26.827279 | orchestrator | skipping: [testbed-node-4] 
2026-04-17 03:46:26.827287 | orchestrator | 2026-04-17 03:46:26.827296 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-17 03:46:26.827319 | orchestrator | Friday 17 April 2026 03:46:22 +0000 (0:00:00.124) 0:00:46.803 ********** 2026-04-17 03:46:26.827329 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827337 | orchestrator | 2026-04-17 03:46:26.827346 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-17 03:46:26.827354 | orchestrator | Friday 17 April 2026 03:46:22 +0000 (0:00:00.145) 0:00:46.949 ********** 2026-04-17 03:46:26.827363 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827371 | orchestrator | 2026-04-17 03:46:26.827380 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-17 03:46:26.827388 | orchestrator | Friday 17 April 2026 03:46:22 +0000 (0:00:00.363) 0:00:47.313 ********** 2026-04-17 03:46:26.827397 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827405 | orchestrator | 2026-04-17 03:46:26.827414 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-17 03:46:26.827422 | orchestrator | Friday 17 April 2026 03:46:22 +0000 (0:00:00.134) 0:00:47.447 ********** 2026-04-17 03:46:26.827431 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827440 | orchestrator | 2026-04-17 03:46:26.827449 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-17 03:46:26.827458 | orchestrator | Friday 17 April 2026 03:46:23 +0000 (0:00:00.156) 0:00:47.603 ********** 2026-04-17 03:46:26.827466 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827475 | orchestrator | 2026-04-17 03:46:26.827483 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-17 03:46:26.827492 | orchestrator | 
Friday 17 April 2026 03:46:23 +0000 (0:00:00.139) 0:00:47.743 ********** 2026-04-17 03:46:26.827500 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827509 | orchestrator | 2026-04-17 03:46:26.827517 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-17 03:46:26.827526 | orchestrator | Friday 17 April 2026 03:46:23 +0000 (0:00:00.143) 0:00:47.887 ********** 2026-04-17 03:46:26.827534 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827543 | orchestrator | 2026-04-17 03:46:26.827551 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-17 03:46:26.827560 | orchestrator | Friday 17 April 2026 03:46:23 +0000 (0:00:00.131) 0:00:48.019 ********** 2026-04-17 03:46:26.827569 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:26.827579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:26.827613 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827632 | orchestrator | 2026-04-17 03:46:26.827643 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-17 03:46:26.827653 | orchestrator | Friday 17 April 2026 03:46:23 +0000 (0:00:00.182) 0:00:48.201 ********** 2026-04-17 03:46:26.827664 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:26.827697 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:26.827709 | orchestrator | skipping: 
[testbed-node-4] 2026-04-17 03:46:26.827719 | orchestrator | 2026-04-17 03:46:26.827729 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-17 03:46:26.827739 | orchestrator | Friday 17 April 2026 03:46:23 +0000 (0:00:00.180) 0:00:48.382 ********** 2026-04-17 03:46:26.827749 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:26.827760 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:26.827769 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827779 | orchestrator | 2026-04-17 03:46:26.827789 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-17 03:46:26.827799 | orchestrator | Friday 17 April 2026 03:46:24 +0000 (0:00:00.164) 0:00:48.546 ********** 2026-04-17 03:46:26.827810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:26.827820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:26.827831 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827841 | orchestrator | 2026-04-17 03:46:26.827865 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-17 03:46:26.827876 | orchestrator | Friday 17 April 2026 03:46:24 +0000 (0:00:00.162) 0:00:48.709 ********** 2026-04-17 03:46:26.827886 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 
'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:26.827896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:26.827906 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827916 | orchestrator | 2026-04-17 03:46:26.827926 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-17 03:46:26.827936 | orchestrator | Friday 17 April 2026 03:46:24 +0000 (0:00:00.161) 0:00:48.871 ********** 2026-04-17 03:46:26.827951 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:26.827961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:26.827970 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.827979 | orchestrator | 2026-04-17 03:46:26.827987 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-17 03:46:26.827996 | orchestrator | Friday 17 April 2026 03:46:24 +0000 (0:00:00.149) 0:00:49.021 ********** 2026-04-17 03:46:26.828005 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:26.828014 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:26.828022 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.828031 | orchestrator | 2026-04-17 03:46:26.828040 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-17 
03:46:26.828056 | orchestrator | Friday 17 April 2026 03:46:24 +0000 (0:00:00.374) 0:00:49.395 ********** 2026-04-17 03:46:26.828065 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:26.828074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:26.828083 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.828122 | orchestrator | 2026-04-17 03:46:26.828132 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-17 03:46:26.828141 | orchestrator | Friday 17 April 2026 03:46:25 +0000 (0:00:00.163) 0:00:49.558 ********** 2026-04-17 03:46:26.828150 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:46:26.828159 | orchestrator | 2026-04-17 03:46:26.828168 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-17 03:46:26.828176 | orchestrator | Friday 17 April 2026 03:46:25 +0000 (0:00:00.526) 0:00:50.084 ********** 2026-04-17 03:46:26.828185 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:46:26.828193 | orchestrator | 2026-04-17 03:46:26.828202 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-17 03:46:26.828210 | orchestrator | Friday 17 April 2026 03:46:26 +0000 (0:00:00.546) 0:00:50.631 ********** 2026-04-17 03:46:26.828219 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:46:26.828227 | orchestrator | 2026-04-17 03:46:26.828236 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-17 03:46:26.828245 | orchestrator | Friday 17 April 2026 03:46:26 +0000 (0:00:00.160) 0:00:50.791 ********** 2026-04-17 03:46:26.828253 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'vg_name': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'}) 2026-04-17 03:46:26.828264 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'vg_name': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'}) 2026-04-17 03:46:26.828272 | orchestrator | 2026-04-17 03:46:26.828281 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-17 03:46:26.828289 | orchestrator | Friday 17 April 2026 03:46:26 +0000 (0:00:00.178) 0:00:50.970 ********** 2026-04-17 03:46:26.828300 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:26.828315 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:26.828329 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:26.828343 | orchestrator | 2026-04-17 03:46:26.828357 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-17 03:46:26.828371 | orchestrator | Friday 17 April 2026 03:46:26 +0000 (0:00:00.170) 0:00:51.141 ********** 2026-04-17 03:46:26.828385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:26.828409 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:33.683338 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:33.683475 | orchestrator | 2026-04-17 03:46:33.683499 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-17 03:46:33.683519 | 
orchestrator | Friday 17 April 2026 03:46:26 +0000 (0:00:00.166) 0:00:51.307 ********** 2026-04-17 03:46:33.683538 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 03:46:33.683559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 03:46:33.683610 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:46:33.683630 | orchestrator | 2026-04-17 03:46:33.683668 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-17 03:46:33.683687 | orchestrator | Friday 17 April 2026 03:46:26 +0000 (0:00:00.166) 0:00:51.473 ********** 2026-04-17 03:46:33.683706 | orchestrator | ok: [testbed-node-4] => { 2026-04-17 03:46:33.683724 | orchestrator |  "lvm_report": { 2026-04-17 03:46:33.683744 | orchestrator |  "lv": [ 2026-04-17 03:46:33.683762 | orchestrator |  { 2026-04-17 03:46:33.683780 | orchestrator |  "lv_name": "osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0", 2026-04-17 03:46:33.683800 | orchestrator |  "vg_name": "ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0" 2026-04-17 03:46:33.683817 | orchestrator |  }, 2026-04-17 03:46:33.683835 | orchestrator |  { 2026-04-17 03:46:33.683855 | orchestrator |  "lv_name": "osd-block-b2b01680-30d5-524c-a810-0db40fd977fd", 2026-04-17 03:46:33.683874 | orchestrator |  "vg_name": "ceph-b2b01680-30d5-524c-a810-0db40fd977fd" 2026-04-17 03:46:33.683892 | orchestrator |  } 2026-04-17 03:46:33.683911 | orchestrator |  ], 2026-04-17 03:46:33.683965 | orchestrator |  "pv": [ 2026-04-17 03:46:33.683984 | orchestrator |  { 2026-04-17 03:46:33.684003 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-17 03:46:33.684022 | orchestrator |  "vg_name": "ceph-b2b01680-30d5-524c-a810-0db40fd977fd" 2026-04-17 03:46:33.684041 | orchestrator |  }, 2026-04-17 
03:46:33.684060 | orchestrator |  { 2026-04-17 03:46:33.684079 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-17 03:46:33.684125 | orchestrator |  "vg_name": "ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0" 2026-04-17 03:46:33.684145 | orchestrator |  } 2026-04-17 03:46:33.684163 | orchestrator |  ] 2026-04-17 03:46:33.684181 | orchestrator |  } 2026-04-17 03:46:33.684199 | orchestrator | } 2026-04-17 03:46:33.684218 | orchestrator | 2026-04-17 03:46:33.684235 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-17 03:46:33.684253 | orchestrator | 2026-04-17 03:46:33.684271 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-17 03:46:33.684289 | orchestrator | Friday 17 April 2026 03:46:27 +0000 (0:00:00.311) 0:00:51.785 ********** 2026-04-17 03:46:33.684307 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-17 03:46:33.684325 | orchestrator | 2026-04-17 03:46:33.684342 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-17 03:46:33.684360 | orchestrator | Friday 17 April 2026 03:46:28 +0000 (0:00:00.766) 0:00:52.552 ********** 2026-04-17 03:46:33.684379 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:46:33.684398 | orchestrator | 2026-04-17 03:46:33.684416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.684433 | orchestrator | Friday 17 April 2026 03:46:28 +0000 (0:00:00.250) 0:00:52.802 ********** 2026-04-17 03:46:33.684451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-17 03:46:33.684469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-17 03:46:33.684487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-17 03:46:33.684504 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-17 03:46:33.684521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-17 03:46:33.684539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-17 03:46:33.684556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-17 03:46:33.684574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-17 03:46:33.684605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-17 03:46:33.684623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-17 03:46:33.684641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-17 03:46:33.684659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-17 03:46:33.684677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-17 03:46:33.684694 | orchestrator | 2026-04-17 03:46:33.684712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.684730 | orchestrator | Friday 17 April 2026 03:46:28 +0000 (0:00:00.421) 0:00:53.223 ********** 2026-04-17 03:46:33.684748 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:33.684786 | orchestrator | 2026-04-17 03:46:33.684804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.684822 | orchestrator | Friday 17 April 2026 03:46:28 +0000 (0:00:00.213) 0:00:53.437 ********** 2026-04-17 03:46:33.684840 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:33.684858 | orchestrator | 2026-04-17 
03:46:33.684876 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.684916 | orchestrator | Friday 17 April 2026 03:46:29 +0000 (0:00:00.209) 0:00:53.646 ********** 2026-04-17 03:46:33.684934 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:33.684952 | orchestrator | 2026-04-17 03:46:33.684970 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.684987 | orchestrator | Friday 17 April 2026 03:46:29 +0000 (0:00:00.208) 0:00:53.855 ********** 2026-04-17 03:46:33.685005 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:33.685023 | orchestrator | 2026-04-17 03:46:33.685040 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.685058 | orchestrator | Friday 17 April 2026 03:46:29 +0000 (0:00:00.214) 0:00:54.070 ********** 2026-04-17 03:46:33.685076 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:33.685093 | orchestrator | 2026-04-17 03:46:33.685135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.685153 | orchestrator | Friday 17 April 2026 03:46:29 +0000 (0:00:00.242) 0:00:54.312 ********** 2026-04-17 03:46:33.685171 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:33.685188 | orchestrator | 2026-04-17 03:46:33.685206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.685224 | orchestrator | Friday 17 April 2026 03:46:30 +0000 (0:00:00.212) 0:00:54.524 ********** 2026-04-17 03:46:33.685242 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:33.685260 | orchestrator | 2026-04-17 03:46:33.685277 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.685295 | orchestrator | Friday 17 April 2026 03:46:30 +0000 (0:00:00.214) 
0:00:54.739 ********** 2026-04-17 03:46:33.685313 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:33.685330 | orchestrator | 2026-04-17 03:46:33.685348 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.685366 | orchestrator | Friday 17 April 2026 03:46:30 +0000 (0:00:00.212) 0:00:54.951 ********** 2026-04-17 03:46:33.685383 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e) 2026-04-17 03:46:33.685403 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e) 2026-04-17 03:46:33.685421 | orchestrator | 2026-04-17 03:46:33.685438 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.685456 | orchestrator | Friday 17 April 2026 03:46:31 +0000 (0:00:00.993) 0:00:55.944 ********** 2026-04-17 03:46:33.685579 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac) 2026-04-17 03:46:33.685606 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac) 2026-04-17 03:46:33.685636 | orchestrator | 2026-04-17 03:46:33.685653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.685671 | orchestrator | Friday 17 April 2026 03:46:31 +0000 (0:00:00.475) 0:00:56.420 ********** 2026-04-17 03:46:33.685689 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b) 2026-04-17 03:46:33.685707 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b) 2026-04-17 03:46:33.685724 | orchestrator | 2026-04-17 03:46:33.685741 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.685759 | orchestrator | Friday 17 
April 2026 03:46:32 +0000 (0:00:00.484) 0:00:56.905 ********** 2026-04-17 03:46:33.685777 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134) 2026-04-17 03:46:33.685796 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134) 2026-04-17 03:46:33.685814 | orchestrator | 2026-04-17 03:46:33.685831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 03:46:33.685849 | orchestrator | Friday 17 April 2026 03:46:32 +0000 (0:00:00.480) 0:00:57.385 ********** 2026-04-17 03:46:33.685867 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-17 03:46:33.685885 | orchestrator | 2026-04-17 03:46:33.685947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:33.685965 | orchestrator | Friday 17 April 2026 03:46:33 +0000 (0:00:00.340) 0:00:57.726 ********** 2026-04-17 03:46:33.685983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-17 03:46:33.686001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-17 03:46:33.686142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-17 03:46:33.686166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-17 03:46:33.686184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-17 03:46:33.686202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-17 03:46:33.686220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-17 03:46:33.686238 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-17 03:46:33.686256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-17 03:46:33.686274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-17 03:46:33.686292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-17 03:46:33.686325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-17 03:46:42.862812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-17 03:46:42.862942 | orchestrator | 2026-04-17 03:46:42.862964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.862975 | orchestrator | Friday 17 April 2026 03:46:33 +0000 (0:00:00.429) 0:00:58.155 ********** 2026-04-17 03:46:42.862987 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.862999 | orchestrator | 2026-04-17 03:46:42.863011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863024 | orchestrator | Friday 17 April 2026 03:46:33 +0000 (0:00:00.206) 0:00:58.362 ********** 2026-04-17 03:46:42.863036 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.863047 | orchestrator | 2026-04-17 03:46:42.863078 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863147 | orchestrator | Friday 17 April 2026 03:46:34 +0000 (0:00:00.227) 0:00:58.590 ********** 2026-04-17 03:46:42.863164 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.863177 | orchestrator | 2026-04-17 03:46:42.863191 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863204 | 
orchestrator | Friday 17 April 2026 03:46:34 +0000 (0:00:00.209) 0:00:58.800 ********** 2026-04-17 03:46:42.863218 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.863231 | orchestrator | 2026-04-17 03:46:42.863244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863258 | orchestrator | Friday 17 April 2026 03:46:34 +0000 (0:00:00.203) 0:00:59.003 ********** 2026-04-17 03:46:42.863271 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.863286 | orchestrator | 2026-04-17 03:46:42.863296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863307 | orchestrator | Friday 17 April 2026 03:46:35 +0000 (0:00:00.779) 0:00:59.783 ********** 2026-04-17 03:46:42.863320 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.863339 | orchestrator | 2026-04-17 03:46:42.863354 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863368 | orchestrator | Friday 17 April 2026 03:46:35 +0000 (0:00:00.216) 0:01:00.000 ********** 2026-04-17 03:46:42.863381 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.863394 | orchestrator | 2026-04-17 03:46:42.863406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863417 | orchestrator | Friday 17 April 2026 03:46:35 +0000 (0:00:00.208) 0:01:00.208 ********** 2026-04-17 03:46:42.863430 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.863444 | orchestrator | 2026-04-17 03:46:42.863457 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863471 | orchestrator | Friday 17 April 2026 03:46:35 +0000 (0:00:00.238) 0:01:00.447 ********** 2026-04-17 03:46:42.863486 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-17 03:46:42.863501 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-04-17 03:46:42.863517 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-17 03:46:42.863528 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-17 03:46:42.863537 | orchestrator | 2026-04-17 03:46:42.863546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863555 | orchestrator | Friday 17 April 2026 03:46:36 +0000 (0:00:00.674) 0:01:01.121 ********** 2026-04-17 03:46:42.863564 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.863574 | orchestrator | 2026-04-17 03:46:42.863583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863592 | orchestrator | Friday 17 April 2026 03:46:36 +0000 (0:00:00.212) 0:01:01.334 ********** 2026-04-17 03:46:42.863601 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.863609 | orchestrator | 2026-04-17 03:46:42.863618 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863628 | orchestrator | Friday 17 April 2026 03:46:37 +0000 (0:00:00.210) 0:01:01.545 ********** 2026-04-17 03:46:42.863638 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.863651 | orchestrator | 2026-04-17 03:46:42.863665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 03:46:42.863679 | orchestrator | Friday 17 April 2026 03:46:37 +0000 (0:00:00.211) 0:01:01.757 ********** 2026-04-17 03:46:42.863692 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.863706 | orchestrator | 2026-04-17 03:46:42.863720 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-17 03:46:42.863733 | orchestrator | Friday 17 April 2026 03:46:37 +0000 (0:00:00.207) 0:01:01.965 ********** 2026-04-17 03:46:42.863746 | orchestrator | skipping: [testbed-node-5] 2026-04-17 
03:46:42.863758 | orchestrator | 2026-04-17 03:46:42.863766 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-17 03:46:42.863773 | orchestrator | Friday 17 April 2026 03:46:37 +0000 (0:00:00.142) 0:01:02.107 ********** 2026-04-17 03:46:42.863793 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '690571ed-11b8-555e-b420-011f2882a19f'}}) 2026-04-17 03:46:42.863802 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '58d5b32d-9713-5f24-a4e2-aea701c9df8d'}}) 2026-04-17 03:46:42.863810 | orchestrator | 2026-04-17 03:46:42.863817 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-17 03:46:42.863825 | orchestrator | Friday 17 April 2026 03:46:37 +0000 (0:00:00.197) 0:01:02.304 ********** 2026-04-17 03:46:42.863834 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'}) 2026-04-17 03:46:42.863844 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'}) 2026-04-17 03:46:42.863852 | orchestrator | 2026-04-17 03:46:42.863860 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-17 03:46:42.863886 | orchestrator | Friday 17 April 2026 03:46:39 +0000 (0:00:01.764) 0:01:04.069 ********** 2026-04-17 03:46:42.863894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:42.863903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:42.863911 | orchestrator | skipping: 
[testbed-node-5] 2026-04-17 03:46:42.863919 | orchestrator | 2026-04-17 03:46:42.863927 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-17 03:46:42.863942 | orchestrator | Friday 17 April 2026 03:46:39 +0000 (0:00:00.391) 0:01:04.460 ********** 2026-04-17 03:46:42.863950 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'}) 2026-04-17 03:46:42.863958 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'}) 2026-04-17 03:46:42.863966 | orchestrator | 2026-04-17 03:46:42.863974 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-17 03:46:42.863982 | orchestrator | Friday 17 April 2026 03:46:41 +0000 (0:00:01.391) 0:01:05.852 ********** 2026-04-17 03:46:42.863990 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:42.864004 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:42.864016 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.864028 | orchestrator | 2026-04-17 03:46:42.864042 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-17 03:46:42.864055 | orchestrator | Friday 17 April 2026 03:46:41 +0000 (0:00:00.176) 0:01:06.029 ********** 2026-04-17 03:46:42.864069 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.864081 | orchestrator | 2026-04-17 03:46:42.864095 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-17 03:46:42.864124 | 
orchestrator | Friday 17 April 2026 03:46:41 +0000 (0:00:00.159) 0:01:06.189 ********** 2026-04-17 03:46:42.864134 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:42.864147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:42.864167 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.864181 | orchestrator | 2026-04-17 03:46:42.864205 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-17 03:46:42.864217 | orchestrator | Friday 17 April 2026 03:46:41 +0000 (0:00:00.169) 0:01:06.358 ********** 2026-04-17 03:46:42.864229 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.864242 | orchestrator | 2026-04-17 03:46:42.864254 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-17 03:46:42.864266 | orchestrator | Friday 17 April 2026 03:46:42 +0000 (0:00:00.152) 0:01:06.510 ********** 2026-04-17 03:46:42.864279 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:42.864290 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:42.864301 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.864313 | orchestrator | 2026-04-17 03:46:42.864325 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-17 03:46:42.864338 | orchestrator | Friday 17 April 2026 03:46:42 +0000 (0:00:00.166) 0:01:06.677 ********** 2026-04-17 03:46:42.864350 | orchestrator | 
skipping: [testbed-node-5] 2026-04-17 03:46:42.864364 | orchestrator | 2026-04-17 03:46:42.864378 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-17 03:46:42.864391 | orchestrator | Friday 17 April 2026 03:46:42 +0000 (0:00:00.166) 0:01:06.843 ********** 2026-04-17 03:46:42.864404 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:42.864416 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:42.864429 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:42.864441 | orchestrator | 2026-04-17 03:46:42.864453 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-17 03:46:42.864480 | orchestrator | Friday 17 April 2026 03:46:42 +0000 (0:00:00.169) 0:01:07.013 ********** 2026-04-17 03:46:42.864494 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:46:42.864507 | orchestrator | 2026-04-17 03:46:42.864521 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-17 03:46:42.864535 | orchestrator | Friday 17 April 2026 03:46:42 +0000 (0:00:00.140) 0:01:07.154 ********** 2026-04-17 03:46:42.864560 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:49.569764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:49.569880 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.569898 | orchestrator | 2026-04-17 03:46:49.569911 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-04-17 03:46:49.569924 | orchestrator | Friday 17 April 2026 03:46:42 +0000 (0:00:00.190) 0:01:07.344 ********** 2026-04-17 03:46:49.569952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:49.569964 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:49.569975 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.569986 | orchestrator | 2026-04-17 03:46:49.569997 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-17 03:46:49.570009 | orchestrator | Friday 17 April 2026 03:46:43 +0000 (0:00:00.150) 0:01:07.494 ********** 2026-04-17 03:46:49.570076 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:49.570153 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:49.570174 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.570189 | orchestrator | 2026-04-17 03:46:49.570201 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-17 03:46:49.570212 | orchestrator | Friday 17 April 2026 03:46:43 +0000 (0:00:00.395) 0:01:07.890 ********** 2026-04-17 03:46:49.570222 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.570233 | orchestrator | 2026-04-17 03:46:49.570244 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-17 03:46:49.570255 | orchestrator | Friday 17 April 2026 03:46:43 +0000 
(0:00:00.155) 0:01:08.045 ********** 2026-04-17 03:46:49.570266 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.570278 | orchestrator | 2026-04-17 03:46:49.570292 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-17 03:46:49.570304 | orchestrator | Friday 17 April 2026 03:46:43 +0000 (0:00:00.146) 0:01:08.192 ********** 2026-04-17 03:46:49.570317 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.570330 | orchestrator | 2026-04-17 03:46:49.570342 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-17 03:46:49.570354 | orchestrator | Friday 17 April 2026 03:46:43 +0000 (0:00:00.154) 0:01:08.347 ********** 2026-04-17 03:46:49.570367 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 03:46:49.570379 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-17 03:46:49.570392 | orchestrator | } 2026-04-17 03:46:49.570405 | orchestrator | 2026-04-17 03:46:49.570417 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-17 03:46:49.570429 | orchestrator | Friday 17 April 2026 03:46:44 +0000 (0:00:00.169) 0:01:08.516 ********** 2026-04-17 03:46:49.570457 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 03:46:49.570479 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-17 03:46:49.570491 | orchestrator | } 2026-04-17 03:46:49.570503 | orchestrator | 2026-04-17 03:46:49.570515 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-17 03:46:49.570527 | orchestrator | Friday 17 April 2026 03:46:44 +0000 (0:00:00.158) 0:01:08.674 ********** 2026-04-17 03:46:49.570539 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 03:46:49.570551 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-17 03:46:49.570564 | orchestrator | } 2026-04-17 03:46:49.570576 | orchestrator | 2026-04-17 03:46:49.570588 | orchestrator | 
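The three "Print number of OSDs wanted per ... VG" tasks above all show empty dicts because this testbed uses block-only OSDs: none of the `lvm_volumes` entries carry a `db_vg`, `wal_vg`, or DB+WAL reference to count. A sketch of that counting step under that assumption (the function and key names are illustrative, not the playbook's):

```python
from collections import Counter

def count_osds_per_vg(lvm_volumes, key):
    """Count how many OSDs reference each DB/WAL VG in lvm_volumes.

    Entries without the given key (e.g. 'db_vg' or 'wal_vg') are skipped,
    which is why a block-only layout like the one above yields {}.
    """
    return dict(Counter(vol[key] for vol in lvm_volumes if key in vol))
```

With the two block-only volume dicts from this run, every key produces `{}`, matching `_num_osds_wanted_per_db_vg`, `_num_osds_wanted_per_wal_vg`, and `_num_osds_wanted_per_db_wal_vg` in the log.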
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-17 03:46:49.570600 | orchestrator | Friday 17 April 2026 03:46:44 +0000 (0:00:00.145) 0:01:08.820 ********** 2026-04-17 03:46:49.570612 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:46:49.570625 | orchestrator | 2026-04-17 03:46:49.570637 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-17 03:46:49.570649 | orchestrator | Friday 17 April 2026 03:46:44 +0000 (0:00:00.531) 0:01:09.352 ********** 2026-04-17 03:46:49.570660 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:46:49.570670 | orchestrator | 2026-04-17 03:46:49.570681 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-17 03:46:49.570692 | orchestrator | Friday 17 April 2026 03:46:45 +0000 (0:00:00.518) 0:01:09.871 ********** 2026-04-17 03:46:49.570703 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:46:49.570726 | orchestrator | 2026-04-17 03:46:49.570747 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-17 03:46:49.570758 | orchestrator | Friday 17 April 2026 03:46:45 +0000 (0:00:00.544) 0:01:10.415 ********** 2026-04-17 03:46:49.570768 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:46:49.570779 | orchestrator | 2026-04-17 03:46:49.570790 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-17 03:46:49.570801 | orchestrator | Friday 17 April 2026 03:46:46 +0000 (0:00:00.148) 0:01:10.563 ********** 2026-04-17 03:46:49.570820 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.570831 | orchestrator | 2026-04-17 03:46:49.570842 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-17 03:46:49.570853 | orchestrator | Friday 17 April 2026 03:46:46 +0000 (0:00:00.111) 0:01:10.675 ********** 2026-04-17 03:46:49.570864 | 
orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.570874 | orchestrator | 2026-04-17 03:46:49.570885 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-17 03:46:49.570896 | orchestrator | Friday 17 April 2026 03:46:46 +0000 (0:00:00.352) 0:01:11.027 ********** 2026-04-17 03:46:49.570906 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 03:46:49.570918 | orchestrator |  "vgs_report": { 2026-04-17 03:46:49.570930 | orchestrator |  "vg": [] 2026-04-17 03:46:49.570960 | orchestrator |  } 2026-04-17 03:46:49.570972 | orchestrator | } 2026-04-17 03:46:49.570983 | orchestrator | 2026-04-17 03:46:49.570994 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-17 03:46:49.571005 | orchestrator | Friday 17 April 2026 03:46:46 +0000 (0:00:00.194) 0:01:11.222 ********** 2026-04-17 03:46:49.571018 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571037 | orchestrator | 2026-04-17 03:46:49.571056 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-17 03:46:49.571075 | orchestrator | Friday 17 April 2026 03:46:46 +0000 (0:00:00.136) 0:01:11.359 ********** 2026-04-17 03:46:49.571096 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571186 | orchestrator | 2026-04-17 03:46:49.571201 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-17 03:46:49.571220 | orchestrator | Friday 17 April 2026 03:46:47 +0000 (0:00:00.157) 0:01:11.517 ********** 2026-04-17 03:46:49.571231 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571242 | orchestrator | 2026-04-17 03:46:49.571253 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-17 03:46:49.571264 | orchestrator | Friday 17 April 2026 03:46:47 +0000 (0:00:00.147) 0:01:11.664 ********** 2026-04-17 03:46:49.571274 | 
orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571285 | orchestrator | 2026-04-17 03:46:49.571296 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-17 03:46:49.571307 | orchestrator | Friday 17 April 2026 03:46:47 +0000 (0:00:00.152) 0:01:11.817 ********** 2026-04-17 03:46:49.571317 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571328 | orchestrator | 2026-04-17 03:46:49.571338 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-17 03:46:49.571349 | orchestrator | Friday 17 April 2026 03:46:47 +0000 (0:00:00.134) 0:01:11.952 ********** 2026-04-17 03:46:49.571360 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571370 | orchestrator | 2026-04-17 03:46:49.571381 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-17 03:46:49.571392 | orchestrator | Friday 17 April 2026 03:46:47 +0000 (0:00:00.158) 0:01:12.111 ********** 2026-04-17 03:46:49.571402 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571413 | orchestrator | 2026-04-17 03:46:49.571424 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-17 03:46:49.571435 | orchestrator | Friday 17 April 2026 03:46:47 +0000 (0:00:00.173) 0:01:12.284 ********** 2026-04-17 03:46:49.571445 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571456 | orchestrator | 2026-04-17 03:46:49.571467 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-17 03:46:49.571478 | orchestrator | Friday 17 April 2026 03:46:47 +0000 (0:00:00.138) 0:01:12.423 ********** 2026-04-17 03:46:49.571489 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571499 | orchestrator | 2026-04-17 03:46:49.571510 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
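The "Fail if size of ... LVs > available" tasks above compare the space the wanted LVs would need against the free bytes gathered per VG (the `vgs_report` with `"vg": []` printed earlier is why they all skip here). A rough Python sketch of such a fit check, assuming `vgs --reportformat json`-style fields (`vg_name`, `vg_free` in bytes) — the playbook's exact field names and logic are not shown in this log:

```python
def overfull_vgs(vgs_report, wanted):
    """Return the VGs whose wanted LVs would not fit in the free space.

    vgs_report: {'vg': [{'vg_name': ..., 'vg_free': <bytes>}, ...]}
    wanted:     {vg_name: (num_lvs, lv_size_bytes), ...}
    """
    free = {vg["vg_name"]: int(vg["vg_free"]) for vg in vgs_report["vg"]}
    return [name for name, (num_lvs, lv_size) in wanted.items()
            if num_lvs * lv_size > free.get(name, 0)]
```

With an empty `vg` list and nothing wanted (as in this run), the check trivially passes, so every fail task skips.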
2026-04-17 03:46:49.571521 | orchestrator | Friday 17 April 2026 03:46:48 +0000 (0:00:00.146) 0:01:12.569 ********** 2026-04-17 03:46:49.571531 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571542 | orchestrator | 2026-04-17 03:46:49.571561 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-17 03:46:49.571572 | orchestrator | Friday 17 April 2026 03:46:48 +0000 (0:00:00.147) 0:01:12.717 ********** 2026-04-17 03:46:49.571583 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571593 | orchestrator | 2026-04-17 03:46:49.571604 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-17 03:46:49.571615 | orchestrator | Friday 17 April 2026 03:46:48 +0000 (0:00:00.384) 0:01:13.101 ********** 2026-04-17 03:46:49.571626 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571636 | orchestrator | 2026-04-17 03:46:49.571647 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-17 03:46:49.571658 | orchestrator | Friday 17 April 2026 03:46:48 +0000 (0:00:00.149) 0:01:13.250 ********** 2026-04-17 03:46:49.571668 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571679 | orchestrator | 2026-04-17 03:46:49.571690 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-17 03:46:49.571700 | orchestrator | Friday 17 April 2026 03:46:48 +0000 (0:00:00.150) 0:01:13.401 ********** 2026-04-17 03:46:49.571711 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571722 | orchestrator | 2026-04-17 03:46:49.571732 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-17 03:46:49.571743 | orchestrator | Friday 17 April 2026 03:46:49 +0000 (0:00:00.152) 0:01:13.554 ********** 2026-04-17 03:46:49.571754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:49.571765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:49.571776 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571787 | orchestrator | 2026-04-17 03:46:49.571797 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-17 03:46:49.571808 | orchestrator | Friday 17 April 2026 03:46:49 +0000 (0:00:00.162) 0:01:13.716 ********** 2026-04-17 03:46:49.571819 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:49.571830 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:49.571841 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:49.571851 | orchestrator | 2026-04-17 03:46:49.571862 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-17 03:46:49.571873 | orchestrator | Friday 17 April 2026 03:46:49 +0000 (0:00:00.161) 0:01:13.878 ********** 2026-04-17 03:46:49.571894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:52.744510 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:52.744617 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:52.744633 | orchestrator | 2026-04-17 03:46:52.744647 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-04-17 03:46:52.744660 | orchestrator | Friday 17 April 2026 03:46:49 +0000 (0:00:00.172) 0:01:14.051 ********** 2026-04-17 03:46:52.744717 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:52.744730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:52.744742 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:52.744753 | orchestrator | 2026-04-17 03:46:52.744790 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-17 03:46:52.744802 | orchestrator | Friday 17 April 2026 03:46:49 +0000 (0:00:00.167) 0:01:14.219 ********** 2026-04-17 03:46:52.744814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:52.744825 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:52.744836 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:52.744847 | orchestrator | 2026-04-17 03:46:52.744859 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-17 03:46:52.744870 | orchestrator | Friday 17 April 2026 03:46:49 +0000 (0:00:00.170) 0:01:14.389 ********** 2026-04-17 03:46:52.744881 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:52.744892 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:52.744899 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:52.744906 | orchestrator | 2026-04-17 03:46:52.744913 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-17 03:46:52.744920 | orchestrator | Friday 17 April 2026 03:46:50 +0000 (0:00:00.159) 0:01:14.549 ********** 2026-04-17 03:46:52.744926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:52.744933 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:52.744939 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:52.744946 | orchestrator | 2026-04-17 03:46:52.744953 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-17 03:46:52.744959 | orchestrator | Friday 17 April 2026 03:46:50 +0000 (0:00:00.158) 0:01:14.708 ********** 2026-04-17 03:46:52.744966 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:52.744973 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:52.744979 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:52.744986 | orchestrator | 2026-04-17 03:46:52.744992 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-17 03:46:52.744999 | orchestrator | Friday 17 April 2026 03:46:50 +0000 (0:00:00.174) 0:01:14.882 ********** 2026-04-17 03:46:52.745006 | 
orchestrator | ok: [testbed-node-5] 2026-04-17 03:46:52.745014 | orchestrator | 2026-04-17 03:46:52.745020 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-17 03:46:52.745027 | orchestrator | Friday 17 April 2026 03:46:51 +0000 (0:00:00.807) 0:01:15.689 ********** 2026-04-17 03:46:52.745033 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:46:52.745040 | orchestrator | 2026-04-17 03:46:52.745047 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-17 03:46:52.745053 | orchestrator | Friday 17 April 2026 03:46:51 +0000 (0:00:00.506) 0:01:16.196 ********** 2026-04-17 03:46:52.745060 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:46:52.745067 | orchestrator | 2026-04-17 03:46:52.745074 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-17 03:46:52.745080 | orchestrator | Friday 17 April 2026 03:46:51 +0000 (0:00:00.172) 0:01:16.368 ********** 2026-04-17 03:46:52.745087 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'vg_name': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'}) 2026-04-17 03:46:52.745102 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'vg_name': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'}) 2026-04-17 03:46:52.745108 | orchestrator | 2026-04-17 03:46:52.745134 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-17 03:46:52.745145 | orchestrator | Friday 17 April 2026 03:46:52 +0000 (0:00:00.182) 0:01:16.551 ********** 2026-04-17 03:46:52.745175 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:52.745187 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:52.745198 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:52.745210 | orchestrator | 2026-04-17 03:46:52.745228 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-17 03:46:52.745240 | orchestrator | Friday 17 April 2026 03:46:52 +0000 (0:00:00.163) 0:01:16.715 ********** 2026-04-17 03:46:52.745252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:52.745263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:52.745274 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:52.745285 | orchestrator | 2026-04-17 03:46:52.745297 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-17 03:46:52.745309 | orchestrator | Friday 17 April 2026 03:46:52 +0000 (0:00:00.165) 0:01:16.880 ********** 2026-04-17 03:46:52.745321 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 03:46:52.745333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 03:46:52.745344 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:46:52.745355 | orchestrator | 2026-04-17 03:46:52.745366 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-17 03:46:52.745378 | orchestrator | Friday 17 April 2026 03:46:52 +0000 (0:00:00.173) 0:01:17.054 ********** 2026-04-17 03:46:52.745390 | 
orchestrator | ok: [testbed-node-5] => { 2026-04-17 03:46:52.745401 | orchestrator |  "lvm_report": { 2026-04-17 03:46:52.745413 | orchestrator |  "lv": [ 2026-04-17 03:46:52.745425 | orchestrator |  { 2026-04-17 03:46:52.745437 | orchestrator |  "lv_name": "osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d", 2026-04-17 03:46:52.745449 | orchestrator |  "vg_name": "ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d" 2026-04-17 03:46:52.745461 | orchestrator |  }, 2026-04-17 03:46:52.745473 | orchestrator |  { 2026-04-17 03:46:52.745485 | orchestrator |  "lv_name": "osd-block-690571ed-11b8-555e-b420-011f2882a19f", 2026-04-17 03:46:52.745496 | orchestrator |  "vg_name": "ceph-690571ed-11b8-555e-b420-011f2882a19f" 2026-04-17 03:46:52.745508 | orchestrator |  } 2026-04-17 03:46:52.745520 | orchestrator |  ], 2026-04-17 03:46:52.745531 | orchestrator |  "pv": [ 2026-04-17 03:46:52.745543 | orchestrator |  { 2026-04-17 03:46:52.745554 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-17 03:46:52.745567 | orchestrator |  "vg_name": "ceph-690571ed-11b8-555e-b420-011f2882a19f" 2026-04-17 03:46:52.745578 | orchestrator |  }, 2026-04-17 03:46:52.745590 | orchestrator |  { 2026-04-17 03:46:52.745602 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-17 03:46:52.745613 | orchestrator |  "vg_name": "ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d" 2026-04-17 03:46:52.745640 | orchestrator |  } 2026-04-17 03:46:52.745651 | orchestrator |  ] 2026-04-17 03:46:52.745662 | orchestrator |  } 2026-04-17 03:46:52.745673 | orchestrator | } 2026-04-17 03:46:52.745684 | orchestrator | 2026-04-17 03:46:52.745695 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:46:52.745706 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-17 03:46:52.745717 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-17 03:46:52.745728 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-17 03:46:52.745738 | orchestrator | 2026-04-17 03:46:52.745749 | orchestrator | 2026-04-17 03:46:52.745760 | orchestrator | 2026-04-17 03:46:52.745771 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:46:52.745782 | orchestrator | Friday 17 April 2026 03:46:52 +0000 (0:00:00.149) 0:01:17.203 ********** 2026-04-17 03:46:52.745792 | orchestrator | =============================================================================== 2026-04-17 03:46:52.745803 | orchestrator | Create block VGs -------------------------------------------------------- 5.54s 2026-04-17 03:46:52.745814 | orchestrator | Create block LVs -------------------------------------------------------- 4.27s 2026-04-17 03:46:52.745825 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.84s 2026-04-17 03:46:52.745835 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.75s 2026-04-17 03:46:52.745846 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.66s 2026-04-17 03:46:52.745856 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s 2026-04-17 03:46:52.745867 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2026-04-17 03:46:52.745878 | orchestrator | Add known links to the list of available block devices ------------------ 1.40s 2026-04-17 03:46:52.745896 | orchestrator | Add known partitions to the list of available block devices ------------- 1.33s 2026-04-17 03:46:53.255449 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.32s 2026-04-17 03:46:53.255544 | orchestrator | Print LVM report data --------------------------------------------------- 1.04s 2026-04-17 03:46:53.255556 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2026-04-17 03:46:53.255565 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2026-04-17 03:46:53.255591 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.94s 2026-04-17 03:46:53.255599 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.92s 2026-04-17 03:46:53.255607 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.85s 2026-04-17 03:46:53.255615 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2026-04-17 03:46:53.255623 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2026-04-17 03:46:53.255631 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2026-04-17 03:46:53.255639 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-04-17 03:47:05.694967 | orchestrator | 2026-04-17 03:47:05 | INFO  | Task 8d84ffb9-2f37-4277-8f49-5da1d7151ca2 (facts) was prepared for execution. 2026-04-17 03:47:05.695056 | orchestrator | 2026-04-17 03:47:05 | INFO  | It takes a moment until task 8d84ffb9-2f37-4277-8f49-5da1d7151ca2 (facts) has been started and output is visible here. 
2026-04-17 03:47:18.847931 | orchestrator |
2026-04-17 03:47:18.848056 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-17 03:47:18.848072 | orchestrator |
2026-04-17 03:47:18.848082 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-17 03:47:18.848116 | orchestrator | Friday 17 April 2026 03:47:10 +0000 (0:00:00.291) 0:00:00.291 **********
2026-04-17 03:47:18.848126 | orchestrator | ok: [testbed-manager]
2026-04-17 03:47:18.848207 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:47:18.848219 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:47:18.848228 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:47:18.848237 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:47:18.848245 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:47:18.848254 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:47:18.848262 | orchestrator |
2026-04-17 03:47:18.848271 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-17 03:47:18.848280 | orchestrator | Friday 17 April 2026 03:47:11 +0000 (0:00:01.182) 0:00:01.473 **********
2026-04-17 03:47:18.848289 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:47:18.848298 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:47:18.848307 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:47:18.848316 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:47:18.848324 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:47:18.848333 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:47:18.848342 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:47:18.848350 | orchestrator |
2026-04-17 03:47:18.848359 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-17 03:47:18.848367 | orchestrator |
2026-04-17 03:47:18.848376 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-17 03:47:18.848385 | orchestrator | Friday 17 April 2026 03:47:12 +0000 (0:00:01.350) 0:00:02.823 **********
2026-04-17 03:47:18.848393 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:47:18.848402 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:47:18.848411 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:47:18.848419 | orchestrator | ok: [testbed-manager]
2026-04-17 03:47:18.848428 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:47:18.848436 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:47:18.848445 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:47:18.848455 | orchestrator |
2026-04-17 03:47:18.848465 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-17 03:47:18.848475 | orchestrator |
2026-04-17 03:47:18.848485 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-17 03:47:18.848495 | orchestrator | Friday 17 April 2026 03:47:17 +0000 (0:00:05.205) 0:00:08.029 **********
2026-04-17 03:47:18.848505 | orchestrator | skipping: [testbed-manager]
2026-04-17 03:47:18.848514 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:47:18.848525 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:47:18.848534 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:47:18.848544 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:47:18.848554 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:47:18.848563 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:47:18.848573 | orchestrator |
2026-04-17 03:47:18.848583 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 03:47:18.848594 | orchestrator | testbed-manager : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-04-17 03:47:18.848605 | orchestrator | testbed-node-0 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-04-17 03:47:18.848614 | orchestrator | testbed-node-1 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-04-17 03:47:18.848624 | orchestrator | testbed-node-2 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-04-17 03:47:18.848636 | orchestrator | testbed-node-3 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-04-17 03:47:18.848646 | orchestrator | testbed-node-4 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-04-17 03:47:18.848664 | orchestrator | testbed-node-5 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-04-17 03:47:18.848674 | orchestrator |
2026-04-17 03:47:18.848684 | orchestrator |
2026-04-17 03:47:18.848693 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 03:47:18.848703 | orchestrator | Friday 17 April 2026 03:47:18 +0000 (0:00:00.579) 0:00:08.609 **********
2026-04-17 03:47:18.848713 | orchestrator | ===============================================================================
2026-04-17 03:47:18.848738 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.21s
2026-04-17 03:47:18.848749 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.35s
2026-04-17 03:47:18.848759 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s
2026-04-17 03:47:18.848769 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s
2026-04-17 03:47:21.385641 | orchestrator | 2026-04-17 03:47:21 | INFO  | Task 10d88e31-6270-4aa5-a909-6f8475ee8bc1 (ceph) was prepared for execution.
2026-04-17 03:47:21.385741 | orchestrator | 2026-04-17 03:47:21 | INFO  | It takes a moment until task 10d88e31-6270-4aa5-a909-6f8475ee8bc1 (ceph) has been started and output is visible here.
2026-04-17 03:47:40.252183 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-17 03:47:40.252277 | orchestrator | 2.16.14
2026-04-17 03:47:40.252286 | orchestrator |
2026-04-17 03:47:40.252291 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-17 03:47:40.252296 | orchestrator |
2026-04-17 03:47:40.252301 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-17 03:47:40.252305 | orchestrator | Friday 17 April 2026 03:47:26 +0000 (0:00:00.809) 0:00:00.809 **********
2026-04-17 03:47:40.252325 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:47:40.252331 | orchestrator |
2026-04-17 03:47:40.252336 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-17 03:47:40.252345 | orchestrator | Friday 17 April 2026 03:47:27 +0000 (0:00:01.209) 0:00:02.018 **********
2026-04-17 03:47:40.252350 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:47:40.252355 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:47:40.252360 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:47:40.252365 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:47:40.252368 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:47:40.252372 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:47:40.252376 | orchestrator |
2026-04-17 03:47:40.252381 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-17 03:47:40.252385 | orchestrator | Friday 17 April 2026 03:47:29 +0000 (0:00:01.360) 0:00:03.379 **********
2026-04-17 03:47:40.252390 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:47:40.252393 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:47:40.252397 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:47:40.252401 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:47:40.252405 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:47:40.252409 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:47:40.252413 | orchestrator |
2026-04-17 03:47:40.252417 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-17 03:47:40.252421 | orchestrator | Friday 17 April 2026 03:47:29 +0000 (0:00:00.778) 0:00:04.157 **********
2026-04-17 03:47:40.252425 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:47:40.252429 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:47:40.252433 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:47:40.252437 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:47:40.252440 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:47:40.252444 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:47:40.252463 | orchestrator |
2026-04-17 03:47:40.252467 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-17 03:47:40.252471 | orchestrator | Friday 17 April 2026 03:47:30 +0000 (0:00:00.977) 0:00:05.135 **********
2026-04-17 03:47:40.252475 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:47:40.252479 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:47:40.252483 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:47:40.252487 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:47:40.252490 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:47:40.252494 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:47:40.252498 | orchestrator |
2026-04-17 03:47:40.252502 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-17 03:47:40.252506 | orchestrator | Friday 17 April 2026 03:47:31 +0000 (0:00:00.845) 0:00:05.980 **********
2026-04-17 03:47:40.252510 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:47:40.252514 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:47:40.252518 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:47:40.252521 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:47:40.252525 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:47:40.252529 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:47:40.252533 | orchestrator |
2026-04-17 03:47:40.252537 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-17 03:47:40.252541 | orchestrator | Friday 17 April 2026 03:47:32 +0000 (0:00:00.660) 0:00:06.641 **********
2026-04-17 03:47:40.252545 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:47:40.252548 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:47:40.252552 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:47:40.252556 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:47:40.252560 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:47:40.252564 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:47:40.252567 | orchestrator |
2026-04-17 03:47:40.252571 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-17 03:47:40.252575 | orchestrator | Friday 17 April 2026 03:47:33 +0000 (0:00:00.825) 0:00:07.467 **********
2026-04-17 03:47:40.252579 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:47:40.252584 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:47:40.252588 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:47:40.252592 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:47:40.252596 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:47:40.252600 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:47:40.252604 | orchestrator |
2026-04-17 03:47:40.252608 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-17 03:47:40.252612 | orchestrator | Friday 17 April 2026 03:47:33 +0000 (0:00:00.709) 0:00:08.177 **********
2026-04-17 03:47:40.252616 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:47:40.252619 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:47:40.252623 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:47:40.252627 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:47:40.252631 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:47:40.252635 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:47:40.252639 | orchestrator |
2026-04-17 03:47:40.252642 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-17 03:47:40.252655 | orchestrator | Friday 17 April 2026 03:47:34 +0000 (0:00:00.971) 0:00:09.148 **********
2026-04-17 03:47:40.252660 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 03:47:40.252664 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 03:47:40.252668 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 03:47:40.252672 | orchestrator |
2026-04-17 03:47:40.252676 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-17 03:47:40.252680 | orchestrator | Friday 17 April 2026 03:47:35 +0000 (0:00:00.675) 0:00:09.824 **********
2026-04-17 03:47:40.252683 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:47:40.252687 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:47:40.252697 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:47:40.252710 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:47:40.252714 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:47:40.252718 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:47:40.252722 | orchestrator |
2026-04-17 03:47:40.252726 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-17 03:47:40.252730 | orchestrator | Friday 17 April 2026 03:47:36 +0000 (0:00:00.746) 0:00:10.571 **********
2026-04-17 03:47:40.252735 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 03:47:40.252739 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 03:47:40.252744 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 03:47:40.252748 | orchestrator |
2026-04-17 03:47:40.252753 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-17 03:47:40.252757 | orchestrator | Friday 17 April 2026 03:47:38 +0000 (0:00:02.400) 0:00:12.972 **********
2026-04-17 03:47:40.252762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-17 03:47:40.252766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-17 03:47:40.252771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-17 03:47:40.252776 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:47:40.252780 | orchestrator |
2026-04-17 03:47:40.252784 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-17 03:47:40.252789 | orchestrator | Friday 17 April 2026 03:47:39 +0000 (0:00:00.440) 0:00:13.412 **********
2026-04-17 03:47:40.252795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 03:47:40.252801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 03:47:40.252806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 03:47:40.252810 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:47:40.252815 | orchestrator |
2026-04-17 03:47:40.252820 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-17 03:47:40.252824 | orchestrator | Friday 17 April 2026 03:47:39 +0000 (0:00:00.634) 0:00:14.046 **********
2026-04-17 03:47:40.252830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:40.252837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:40.252842 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:40.252850 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:47:40.252855 | orchestrator |
2026-04-17 03:47:40.252860 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-17 03:47:40.252864 | orchestrator | Friday 17 April 2026 03:47:40 +0000 (0:00:00.184) 0:00:14.231 **********
2026-04-17 03:47:40.252878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 03:47:37.282316', 'end': '2026-04-17 03:47:37.325059', 'delta': '0:00:00.042743', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 03:47:50.895803 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 03:47:37.797131', 'end': '2026-04-17 03:47:37.846409', 'delta': '0:00:00.049278', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 03:47:50.895946 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 03:47:38.322051', 'end': '2026-04-17 03:47:38.365253', 'delta':
'0:00:00.043202', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 03:47:50.895965 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.895979 | orchestrator | 2026-04-17 03:47:50.895992 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-17 03:47:50.896005 | orchestrator | Friday 17 April 2026 03:47:40 +0000 (0:00:00.183) 0:00:14.414 ********** 2026-04-17 03:47:50.896016 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:47:50.896027 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:47:50.896038 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:47:50.896048 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:47:50.896059 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:47:50.896069 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:47:50.896080 | orchestrator | 2026-04-17 03:47:50.896091 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-17 03:47:50.896102 | orchestrator | Friday 17 April 2026 03:47:41 +0000 (0:00:00.928) 0:00:15.343 ********** 2026-04-17 03:47:50.896113 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 03:47:50.896124 | orchestrator | 2026-04-17 03:47:50.896135 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-17 03:47:50.896145 | orchestrator | Friday 17 April 2026 03:47:42 +0000 (0:00:01.163) 0:00:16.507 ********** 2026-04-17 03:47:50.896156 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.896294 | 
orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:50.896347 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:47:50.896361 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:47:50.896374 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:47:50.896387 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:47:50.896399 | orchestrator | 2026-04-17 03:47:50.896411 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-17 03:47:50.896424 | orchestrator | Friday 17 April 2026 03:47:42 +0000 (0:00:00.639) 0:00:17.147 ********** 2026-04-17 03:47:50.896437 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.896449 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:50.896461 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:47:50.896473 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:47:50.896485 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:47:50.896498 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:47:50.896509 | orchestrator | 2026-04-17 03:47:50.896521 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 03:47:50.896533 | orchestrator | Friday 17 April 2026 03:47:44 +0000 (0:00:01.322) 0:00:18.469 ********** 2026-04-17 03:47:50.896547 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.896566 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:50.896584 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:47:50.896601 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:47:50.896619 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:47:50.896639 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:47:50.896659 | orchestrator | 2026-04-17 03:47:50.896679 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 03:47:50.896719 | orchestrator | Friday 17 April 2026 03:47:44 +0000 
(0:00:00.632) 0:00:19.102 ********** 2026-04-17 03:47:50.896739 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.896751 | orchestrator | 2026-04-17 03:47:50.896761 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-17 03:47:50.896772 | orchestrator | Friday 17 April 2026 03:47:45 +0000 (0:00:00.134) 0:00:19.236 ********** 2026-04-17 03:47:50.896783 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.896886 | orchestrator | 2026-04-17 03:47:50.896992 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 03:47:50.897018 | orchestrator | Friday 17 April 2026 03:47:45 +0000 (0:00:00.241) 0:00:19.478 ********** 2026-04-17 03:47:50.897037 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.897054 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:50.897073 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:47:50.897084 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:47:50.897095 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:47:50.897105 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:47:50.897117 | orchestrator | 2026-04-17 03:47:50.897185 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 03:47:50.897206 | orchestrator | Friday 17 April 2026 03:47:46 +0000 (0:00:00.829) 0:00:20.307 ********** 2026-04-17 03:47:50.897223 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.897241 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:50.897260 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:47:50.897279 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:47:50.897297 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:47:50.897316 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:47:50.897335 | orchestrator | 2026-04-17 03:47:50.897354 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-04-17 03:47:50.897371 | orchestrator | Friday 17 April 2026 03:47:46 +0000 (0:00:00.687) 0:00:20.995 ********** 2026-04-17 03:47:50.897387 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.897398 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:50.897409 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:47:50.897420 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:47:50.897430 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:47:50.897457 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:47:50.897468 | orchestrator | 2026-04-17 03:47:50.897479 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-17 03:47:50.897490 | orchestrator | Friday 17 April 2026 03:47:47 +0000 (0:00:00.867) 0:00:21.862 ********** 2026-04-17 03:47:50.897500 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.897511 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:50.897521 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:47:50.897532 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:47:50.897543 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:47:50.897553 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:47:50.897564 | orchestrator | 2026-04-17 03:47:50.897575 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 03:47:50.897585 | orchestrator | Friday 17 April 2026 03:47:48 +0000 (0:00:00.670) 0:00:22.532 ********** 2026-04-17 03:47:50.897596 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.897606 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:50.897617 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:47:50.897627 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:47:50.897638 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:47:50.897648 | orchestrator 
| skipping: [testbed-node-2] 2026-04-17 03:47:50.897659 | orchestrator | 2026-04-17 03:47:50.897669 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 03:47:50.897687 | orchestrator | Friday 17 April 2026 03:47:49 +0000 (0:00:00.799) 0:00:23.332 ********** 2026-04-17 03:47:50.897707 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.897725 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:50.897742 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:47:50.897759 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:47:50.897775 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:47:50.897793 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:47:50.897809 | orchestrator | 2026-04-17 03:47:50.897826 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 03:47:50.897844 | orchestrator | Friday 17 April 2026 03:47:49 +0000 (0:00:00.654) 0:00:23.987 ********** 2026-04-17 03:47:50.897862 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:50.897879 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:50.897896 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:47:50.897916 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:47:50.897935 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:47:50.897953 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:47:50.897970 | orchestrator | 2026-04-17 03:47:50.897988 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 03:47:50.898000 | orchestrator | Friday 17 April 2026 03:47:50 +0000 (0:00:00.856) 0:00:24.843 ********** 2026-04-17 03:47:50.898014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.898113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.898152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.954893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.954997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.955015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.955030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.955043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.955057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.955070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.955128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:50.955206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.955225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:50.955239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:50.955273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:50.955319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.103476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.103583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.103623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.103638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-17 03:47:51.103652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.103665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.103720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.103733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.103746 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:51.103780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.103796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 
'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.103812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.103841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.103862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.244248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.244349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.244360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.244369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.244399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.244428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.244435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.244443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.244466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.244473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.244480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.244493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.244507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.244520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.533526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.533607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.533618 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:51.533648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.533658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.533677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.533684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.533690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.533696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.533716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.533723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.533735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.533749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.533756 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:47:51.533762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.533769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.533780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.799526 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:51.799539 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:47:51.799549 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:47:51.799560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:51.799641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:47:52.265285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16', 
'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:52.265388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-36-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:47:52.265403 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:47:52.265415 | orchestrator | 2026-04-17 03:47:52.265425 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 03:47:52.265435 | orchestrator | Friday 17 April 2026 03:47:51 +0000 (0:00:01.112) 0:00:25.956 ********** 2026-04-17 03:47:52.265446 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.265477 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.265507 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.265519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.265534 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.265544 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.265553 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.265568 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.277871 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.277959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.277985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.277996 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.278005 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.278080 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.278150 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-17 03:47:52.278216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.278226 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.278249 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.334270 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.334360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.334388 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.334398 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.334425 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.334451 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.334461 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.334478 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.334500 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.619918 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.620014 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.620028 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.620035 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:47:52.620043 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.620084 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.620092 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.620099 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:47:52.620105 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.620117 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.620124 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.620136 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.620143 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.620182 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:47:52.683474 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.683555 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.683575 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.683581 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.683611 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.683616 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.683633 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.683638 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.683646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
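The `skipping` records above are driven by the two `false_condition` expressions they report: `osd_auto_discovery | default(False) | bool` (auto-discovery is off, so per-device items are skipped on OSD nodes) and `inventory_hostname in groups.get(osd_group_name, [])` (non-OSD nodes skip the task entirely). A minimal illustrative sketch follows; the `to_bool` helper and the `"ceph-osd"` group name are stand-in assumptions, not Ansible's actual filter implementation or this deployment's verified group name.

```python
# Stand-in sketch of the two skip conditions seen in the log above.
# Assumptions: to_bool() approximates Ansible's `| bool` filter, and
# osd_group_name resolves to "ceph-osd" (hypothetical for illustration).

def to_bool(value):
    """Rough equivalent of `| bool` for common truthy string inputs."""
    return str(value).lower() in ("1", "true", "yes", "on")

def osd_auto_discovery_enabled(hostvars):
    # false_condition: osd_auto_discovery | default(False) | bool
    return to_bool(hostvars.get("osd_auto_discovery", False))

def is_osd_node(inventory_hostname, groups, osd_group_name="ceph-osd"):
    # false_condition: inventory_hostname in groups.get(osd_group_name, [])
    return inventory_hostname in groups.get(osd_group_name, [])

groups = {"ceph-osd": ["testbed-node-3", "testbed-node-4", "testbed-node-5"]}
print(osd_auto_discovery_enabled({}))         # False -> loop items skipped
print(is_osd_node("testbed-node-0", groups))  # False -> task skipped on this host
print(is_osd_node("testbed-node-5", groups))  # True
```

Both conditions evaluating to False is exactly why every loop item above is reported as "Conditional result was False" rather than executed.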
2026-04-17 03:47:52.683661 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.683694 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.795801 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.795926 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.795943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.795973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.795981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.795997 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.796004 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.796010 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.796016 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:52.796027 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:53.060739 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:53.060840 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:53.060850 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:53.060875 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:53.060890 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:53.060904 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:47:53.060912 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:47:53.060919 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:47:53.060925 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:53.060932 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:53.060939 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:53.060945 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:53.060952 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:47:53.060966 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:48:00.156717 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:48:00.156800 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:48:00.156809 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:48:00.156854 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-36-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-17 03:48:00.156861 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:00.156867 | orchestrator |
2026-04-17 03:48:00.156872 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-17 03:48:00.156877 | orchestrator | Friday 17 April 2026 03:47:53 +0000 (0:00:01.272) 0:00:27.228 **********
2026-04-17 03:48:00.156881 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:00.156886 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:00.156890 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:00.156893 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:48:00.156897 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:48:00.156901 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:48:00.156904 | orchestrator |
2026-04-17 03:48:00.156908 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-17 03:48:00.156912 | orchestrator | Friday 17 April 2026 03:47:54 +0000 (0:00:00.958) 0:00:28.186 **********
2026-04-17 03:48:00.156916 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:00.156920 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:00.156923 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:00.156927 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:48:00.156931 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:48:00.156934 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:48:00.156938 | orchestrator |
2026-04-17 03:48:00.156955 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 03:48:00.156960 | orchestrator | Friday 17 April 2026 03:47:54 +0000 (0:00:00.813) 0:00:29.000 **********
2026-04-17 03:48:00.156964 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:00.156967 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:00.156977 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:00.156981 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:00.156991 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:00.156995 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:00.157001 | orchestrator |
2026-04-17 03:48:00.157007 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 03:48:00.157015 | orchestrator | Friday 17 April 2026 03:47:55 +0000 (0:00:00.638) 0:00:29.639 **********
2026-04-17 03:48:00.157019 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:00.157023 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:00.157027 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:48:00.157030 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:48:00.157034 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:48:00.157038 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:48:00.157041 | orchestrator | 2026-04-17 03:48:00.157045 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 03:48:00.157049 | orchestrator | Friday 17 April 2026 03:47:56 +0000 (0:00:00.833) 0:00:30.473 ********** 2026-04-17 03:48:00.157053 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:48:00.157057 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:48:00.157060 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:48:00.157064 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:48:00.157068 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:48:00.157077 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:48:00.157081 | orchestrator | 2026-04-17 03:48:00.157085 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 03:48:00.157089 | orchestrator | Friday 17 April 2026 03:47:56 +0000 (0:00:00.632) 0:00:31.105 ********** 2026-04-17 03:48:00.157092 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:48:00.157096 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:48:00.157100 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:48:00.157104 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:48:00.157107 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:48:00.157111 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:48:00.157115 | orchestrator | 2026-04-17 03:48:00.157119 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-17 03:48:00.157122 | orchestrator | Friday 17 April 2026 03:47:57 +0000 (0:00:00.865) 0:00:31.970 ********** 
2026-04-17 03:48:00.157188 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-17 03:48:00.157194 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 03:48:00.157198 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-17 03:48:00.157202 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-17 03:48:00.157206 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 03:48:00.157210 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 03:48:00.157213 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-17 03:48:00.157217 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 03:48:00.157221 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-17 03:48:00.157225 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-17 03:48:00.157228 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 03:48:00.157232 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-17 03:48:00.157236 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 03:48:00.157240 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-17 03:48:00.157243 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-17 03:48:00.157247 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 03:48:00.157251 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 03:48:00.157255 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-17 03:48:00.157258 | orchestrator |
2026-04-17 03:48:00.157266 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 03:48:00.157270 | orchestrator | Friday 17 April 2026 03:47:59 +0000 (0:00:01.846) 0:00:33.817 **********
2026-04-17 03:48:00.157274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-17 03:48:00.157278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-17 03:48:00.157282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-17 03:48:00.157285 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:00.157293 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 03:48:15.289900 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 03:48:15.289986 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 03:48:15.289997 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:15.290005 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-17 03:48:15.290057 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-17 03:48:15.290065 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-17 03:48:15.290070 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:15.290093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 03:48:15.290109 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 03:48:15.290117 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 03:48:15.290149 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:15.290158 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-17 03:48:15.290179 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 03:48:15.290185 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-17 03:48:15.290190 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:15.290195 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-17 03:48:15.290200 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-17 03:48:15.290205 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 03:48:15.290210 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:15.290215 | orchestrator |
2026-04-17 03:48:15.290220 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-17 03:48:15.290227 | orchestrator | Friday 17 April 2026 03:48:00 +0000 (0:00:01.022) 0:00:34.840 **********
2026-04-17 03:48:15.290232 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:15.290237 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:15.290241 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:15.290247 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:48:15.290252 | orchestrator |
2026-04-17 03:48:15.290257 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 03:48:15.290263 | orchestrator | Friday 17 April 2026 03:48:01 +0000 (0:00:01.158) 0:00:35.998 **********
2026-04-17 03:48:15.290267 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:15.290272 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:15.290277 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:15.290282 | orchestrator |
2026-04-17 03:48:15.290287 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 03:48:15.290292 | orchestrator | Friday 17 April 2026 03:48:02 +0000 (0:00:00.353) 0:00:36.351 **********
2026-04-17 03:48:15.290297 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:15.290302 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:15.290306 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:15.290311 | orchestrator |
2026-04-17 03:48:15.290316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 03:48:15.290320 | orchestrator | Friday 17 April 2026 03:48:02 +0000 (0:00:00.350) 0:00:36.702 **********
2026-04-17 03:48:15.290329 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:15.290336 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:15.290344 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:15.290351 | orchestrator |
2026-04-17 03:48:15.290358 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 03:48:15.290363 | orchestrator | Friday 17 April 2026 03:48:03 +0000 (0:00:00.510) 0:00:37.213 **********
2026-04-17 03:48:15.290368 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:15.290373 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:15.290378 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:15.290383 | orchestrator |
2026-04-17 03:48:15.290388 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 03:48:15.290393 | orchestrator | Friday 17 April 2026 03:48:03 +0000 (0:00:00.479) 0:00:37.692 **********
2026-04-17 03:48:15.290397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:48:15.290402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:48:15.290407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:48:15.290413 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:15.290418 | orchestrator |
2026-04-17 03:48:15.290423 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 03:48:15.290429 | orchestrator | Friday 17 April 2026 03:48:03 +0000 (0:00:00.389) 0:00:38.082 **********
2026-04-17 03:48:15.290434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:48:15.290446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:48:15.290452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:48:15.290457 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:15.290462 | orchestrator |
2026-04-17 03:48:15.290468 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 03:48:15.290474 | orchestrator | Friday 17 April 2026 03:48:04 +0000 (0:00:00.407) 0:00:38.490 **********
2026-04-17 03:48:15.290479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:48:15.290484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:48:15.290502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:48:15.290507 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:15.290513 | orchestrator |
2026-04-17 03:48:15.290518 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 03:48:15.290524 | orchestrator | Friday 17 April 2026 03:48:04 +0000 (0:00:00.424) 0:00:38.914 **********
2026-04-17 03:48:15.290529 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:15.290535 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:15.290540 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:15.290545 | orchestrator |
2026-04-17 03:48:15.290564 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 03:48:15.290570 | orchestrator | Friday 17 April 2026 03:48:05 +0000 (0:00:00.358) 0:00:39.273 **********
2026-04-17 03:48:15.290575 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-17 03:48:15.290581 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-17 03:48:15.290586 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-17 03:48:15.290591 | orchestrator |
2026-04-17 03:48:15.290597 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-17 03:48:15.290602 | orchestrator | Friday 17 April 2026 03:48:06 +0000 (0:00:01.078) 0:00:40.351 **********
2026-04-17 03:48:15.290608 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 03:48:15.290614 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 03:48:15.290620 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 03:48:15.290625 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:48:15.290631 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 03:48:15.290636 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 03:48:15.290642 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 03:48:15.290647 | orchestrator |
2026-04-17 03:48:15.290652 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-17 03:48:15.290657 | orchestrator | Friday 17 April 2026 03:48:07 +0000 (0:00:00.835) 0:00:41.186 **********
2026-04-17 03:48:15.290662 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 03:48:15.290668 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 03:48:15.290673 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 03:48:15.290679 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:48:15.290684 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 03:48:15.290689 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 03:48:15.290695 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 03:48:15.290700 | orchestrator |
2026-04-17 03:48:15.290706 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 03:48:15.290711 | orchestrator | Friday 17 April 2026 03:48:08 +0000 (0:00:01.961) 0:00:43.148 **********
2026-04-17 03:48:15.290722 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:48:15.290729 | orchestrator |
2026-04-17 03:48:15.290735 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 03:48:15.290741 | orchestrator | Friday 17 April 2026 03:48:10 +0000 (0:00:01.271) 0:00:44.420 **********
2026-04-17 03:48:15.290746 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:48:15.290751 | orchestrator |
2026-04-17 03:48:15.290756 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 03:48:15.290760 | orchestrator | Friday 17 April 2026 03:48:11 +0000 (0:00:01.344) 0:00:45.764 **********
2026-04-17 03:48:15.290765 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:15.290770 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:15.290774 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:15.290779 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:48:15.290784 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:48:15.290789 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:48:15.290794 | orchestrator |
2026-04-17 03:48:15.290798 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 03:48:15.290803 | orchestrator | Friday 17 April 2026 03:48:12 +0000 (0:00:01.225) 0:00:46.990 **********
2026-04-17 03:48:15.290808 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:15.290812 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:15.290817 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:15.290822 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:15.290827 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:15.290831 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:15.290836 | orchestrator |
2026-04-17 03:48:15.290841 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 03:48:15.290845 | orchestrator | Friday 17 April 2026 03:48:13 +0000 (0:00:00.718) 0:00:47.708 **********
2026-04-17 03:48:15.290850 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:15.290855 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:15.290860 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:15.290864 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:15.290869 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:15.290874 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:15.290878 | orchestrator |
2026-04-17 03:48:15.290883 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 03:48:15.290893 | orchestrator | Friday 17 April 2026 03:48:14 +0000 (0:00:01.031) 0:00:48.739 **********
2026-04-17 03:48:15.290898 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:15.290902 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:15.290907 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:15.290912 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:15.290916 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:15.290921 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:15.290926 | orchestrator |
2026-04-17 03:48:15.290931 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 03:48:15.290939 | orchestrator | Friday 17 April 2026 03:48:15 +0000 (0:00:00.716) 0:00:49.456 **********
2026-04-17 03:48:37.160736 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:37.160825 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:37.160832 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:37.160837 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:48:37.160842 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:48:37.160847 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:48:37.160851 | orchestrator |
2026-04-17 03:48:37.160856 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 03:48:37.160862 | orchestrator | Friday 17 April 2026 03:48:16 +0000 (0:00:01.306) 0:00:50.762 **********
2026-04-17 03:48:37.160881 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:37.160885 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:37.160889 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:37.160893 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:37.160897 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:37.160900 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:37.160904 | orchestrator |
2026-04-17 03:48:37.160908 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 03:48:37.160912 | orchestrator | Friday 17 April 2026 03:48:17 +0000 (0:00:00.614) 0:00:51.376 **********
2026-04-17 03:48:37.160916 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:37.160920 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:37.160924 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:37.160928 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:37.160931 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:37.160935 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:37.160939 | orchestrator |
2026-04-17 03:48:37.160942 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 03:48:37.160946 | orchestrator | Friday 17 April 2026 03:48:18 +0000 (0:00:00.918) 0:00:52.294 **********
2026-04-17 03:48:37.160950 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:37.160954 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:37.160957 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:37.160961 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:48:37.160965 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:48:37.160969 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:48:37.160972 | orchestrator |
2026-04-17 03:48:37.160976 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 03:48:37.160980 | orchestrator | Friday 17 April 2026 03:48:19 +0000 (0:00:01.020) 0:00:53.315 **********
2026-04-17 03:48:37.160983 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:37.160987 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:37.160991 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:37.160995 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:48:37.160998 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:48:37.161002 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:48:37.161056 | orchestrator |
2026-04-17 03:48:37.161062 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 03:48:37.161067 | orchestrator | Friday 17 April 2026 03:48:20 +0000 (0:00:01.369) 0:00:54.685 **********
2026-04-17 03:48:37.161070 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:37.161074 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:37.161078 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:37.161082 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:37.161086 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:37.161090 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:37.161093 | orchestrator |
2026-04-17 03:48:37.161098 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 03:48:37.161101 | orchestrator | Friday 17 April 2026 03:48:21 +0000 (0:00:00.604) 0:00:55.289 **********
2026-04-17 03:48:37.161105 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:37.161109 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:37.161112 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:37.161116 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:48:37.161120 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:48:37.161124 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:48:37.161127 | orchestrator |
2026-04-17 03:48:37.161131 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 03:48:37.161135 | orchestrator | Friday 17 April 2026 03:48:22 +0000 (0:00:00.913) 0:00:56.203 **********
2026-04-17 03:48:37.161138 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:37.161142 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:37.161146 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:37.161149 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:37.161158 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:37.161162 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:37.161165 | orchestrator |
2026-04-17 03:48:37.161169 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 03:48:37.161173 | orchestrator | Friday 17 April 2026 03:48:22 +0000 (0:00:00.625) 0:00:56.828 **********
2026-04-17 03:48:37.161177 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:37.161181 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:37.161184 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:37.161188 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:37.161192 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:37.161195 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:37.161199 | orchestrator |
2026-04-17 03:48:37.161203 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 03:48:37.161207 | orchestrator | Friday 17 April 2026 03:48:23 +0000 (0:00:00.895) 0:00:57.724 **********
2026-04-17 03:48:37.161210 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:37.161214 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:37.161218 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:37.161222 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:37.161225 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:37.161229 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:37.161233 | orchestrator |
2026-04-17 03:48:37.161237 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 03:48:37.161251 | orchestrator | Friday 17 April 2026 03:48:24 +0000 (0:00:00.597) 0:00:58.322 **********
2026-04-17 03:48:37.161255 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:37.161258 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:37.161262 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:37.161266 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:37.161270 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:37.161273 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:37.161277 | orchestrator |
2026-04-17 03:48:37.161281 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 03:48:37.161296 | orchestrator | Friday 17 April 2026 03:48:25 +0000 (0:00:00.908) 0:00:59.230 **********
2026-04-17 03:48:37.161300 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:37.161305 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:37.161309 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:37.161314 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:37.161318 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:37.161322 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:37.161326 | orchestrator |
2026-04-17 03:48:37.161331 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 03:48:37.161335 | orchestrator | Friday 17 April 2026 03:48:25 +0000 (0:00:00.607) 0:00:59.837 **********
2026-04-17 03:48:37.161340 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:37.161344 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:37.161348 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:37.161352 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:48:37.161357 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:48:37.161361 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:48:37.161365 | orchestrator |
2026-04-17 03:48:37.161370 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 03:48:37.161374 | orchestrator | Friday 17 April 2026 03:48:26 +0000 (0:00:00.930) 0:01:00.767 **********
2026-04-17 03:48:37.161378 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:37.161382 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:37.161387 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:37.161391 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:48:37.161395 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:48:37.161399 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:48:37.161404 | orchestrator |
2026-04-17 03:48:37.161408 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 03:48:37.161416 | orchestrator | Friday 17 April 2026 03:48:27 +0000 (0:00:00.932) 0:01:01.700 **********
2026-04-17 03:48:37.161420 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:48:37.161424 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:48:37.161429 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:48:37.161433 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:48:37.161437 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:48:37.161441 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:48:37.161445 | orchestrator |
2026-04-17 03:48:37.161450 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-17 03:48:37.161454 | orchestrator | Friday 17 April 2026 03:48:28 +0000 (0:00:01.477) 0:01:03.177 **********
2026-04-17 03:48:37.161459 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:48:37.161463 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:48:37.161467 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:48:37.161472 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:48:37.161476 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:48:37.161480 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:48:37.161484 | orchestrator |
2026-04-17 03:48:37.161489 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-17 03:48:37.161493 | orchestrator | Friday 17 April 2026 03:48:30 +0000 (0:00:01.662) 0:01:04.839 **********
2026-04-17 03:48:37.161498 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:48:37.161502 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:48:37.161506 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:48:37.161511 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:48:37.161515 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:48:37.161520 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:48:37.161524 | orchestrator |
2026-04-17 03:48:37.161528 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-17 03:48:37.161531 | orchestrator | Friday 17 April 2026 03:48:32 +0000 (0:00:02.283) 0:01:07.123 **********
2026-04-17 03:48:37.161536 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:48:37.161542 | orchestrator |
2026-04-17 03:48:37.161545 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-17 03:48:37.161549 | orchestrator | Friday 17 April 2026 03:48:34 +0000 (0:00:01.368) 0:01:08.492 **********
2026-04-17 03:48:37.161553 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:37.161556 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:37.161560 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:37.161564 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:37.161568 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:37.161571 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:37.161575 | orchestrator |
2026-04-17 03:48:37.161579 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-17 03:48:37.161583 | orchestrator | Friday 17 April 2026 03:48:34 +0000 (0:00:00.847) 0:01:09.143 **********
2026-04-17 03:48:37.161586 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:48:37.161590 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:48:37.161594 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:48:37.161597 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:48:37.161601 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:48:37.161605 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:48:37.161609 | orchestrator |
2026-04-17 03:48:37.161612 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-17 03:48:37.161616 | orchestrator | Friday 17 April 2026 03:48:35 +0000 (0:00:00.847) 0:01:09.990 **********
2026-04-17 03:48:37.161620 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 03:48:37.161624 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 03:48:37.161627 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 03:48:37.161637 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 03:48:37.161641 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 03:48:37.161645 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 03:48:37.161650 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 03:48:37.161653 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 03:48:37.161660 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 03:49:55.255526 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 03:49:55.255639 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 03:49:55.255656 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 03:49:55.255667 | orchestrator |
2026-04-17 03:49:55.255678 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-17 03:49:55.255689 | orchestrator | Friday 17 April 2026 03:48:37 +0000 (0:00:01.332) 0:01:11.323 **********
2026-04-17 03:49:55.255699 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:49:55.255710 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:49:55.255720 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:49:55.255729 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:49:55.255771 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:49:55.255811 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:49:55.255822 | orchestrator |
2026-04-17 03:49:55.255833 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-17 03:49:55.255842 | orchestrator | Friday 17 April 2026 03:48:38 +0000 (0:00:01.255) 0:01:12.579 **********
2026-04-17 03:49:55.255852 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:49:55.255862 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:49:55.255871 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:49:55.255881 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:49:55.255890 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:49:55.255900 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:49:55.255909 | orchestrator |
2026-04-17 03:49:55.255919 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-17 03:49:55.255929 | orchestrator | Friday 17 April 2026 03:48:39 +0000 (0:00:00.658) 0:01:13.237 **********
2026-04-17 03:49:55.255939 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:49:55.255949 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:49:55.255958 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:49:55.255974 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:49:55.255990 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:49:55.256006 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:49:55.256030 | orchestrator |
2026-04-17 03:49:55.256050 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-17 03:49:55.256066 | orchestrator | Friday 17 April 2026 03:48:40 +0000 (0:00:00.956) 0:01:14.194 **********
2026-04-17 03:49:55.256083 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:49:55.256099 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:49:55.256115 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:49:55.256131 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:49:55.256147 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:49:55.256162 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:49:55.256178 | orchestrator |
2026-04-17 03:49:55.256194 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-17 03:49:55.256210 | orchestrator | Friday 17 April 2026 03:48:40 +0000 (0:00:00.626) 0:01:14.820 **********
2026-04-17 03:49:55.256228 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:49:55.256277 | orchestrator |
2026-04-17 03:49:55.256296 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-17 03:49:55.256313 | orchestrator | Friday 17 April 2026 03:48:41 +0000 (0:00:01.153) 0:01:15.973 **********
2026-04-17 03:49:55.256330 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:49:55.256348 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:49:55.256364 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:49:55.256379 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:49:55.256389 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:49:55.256398 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:49:55.256407 | orchestrator |
2026-04-17 03:49:55.256417 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-17 03:49:55.256427 | orchestrator | Friday 17 April 2026 03:49:44 +0000 (0:01:02.316) 0:02:18.289 **********
2026-04-17 03:49:55.256436 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 03:49:55.256446 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 03:49:55.256455 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 03:49:55.256465 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:49:55.256474 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 03:49:55.256484 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 03:49:55.256493 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 03:49:55.256502 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:49:55.256512 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 03:49:55.256521 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 03:49:55.256530 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 03:49:55.256540 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:49:55.256563 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 03:49:55.256573 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 03:49:55.256582 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 03:49:55.256592 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:49:55.256601 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 03:49:55.256611 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 03:49:55.256641 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 03:49:55.256651 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:49:55.256661 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 03:49:55.256670 | orchestrator | skipping: [testbed-node-2] =>
(item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 03:49:55.256679 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 03:49:55.256689 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:49:55.256698 | orchestrator | 2026-04-17 03:49:55.256708 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-17 03:49:55.256717 | orchestrator | Friday 17 April 2026 03:49:44 +0000 (0:00:00.702) 0:02:18.992 ********** 2026-04-17 03:49:55.256727 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:49:55.256736 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:49:55.256745 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:49:55.256755 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:49:55.256764 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:49:55.256774 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:49:55.256825 | orchestrator | 2026-04-17 03:49:55.256836 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-17 03:49:55.256857 | orchestrator | Friday 17 April 2026 03:49:45 +0000 (0:00:00.866) 0:02:19.858 ********** 2026-04-17 03:49:55.256867 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:49:55.256876 | orchestrator | 2026-04-17 03:49:55.256886 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-17 03:49:55.256896 | orchestrator | Friday 17 April 2026 03:49:45 +0000 (0:00:00.170) 0:02:20.029 ********** 2026-04-17 03:49:55.256905 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:49:55.256914 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:49:55.256924 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:49:55.256933 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:49:55.256943 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:49:55.256952 | orchestrator | skipping: 
[testbed-node-2] 2026-04-17 03:49:55.256961 | orchestrator | 2026-04-17 03:49:55.256971 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-17 03:49:55.256980 | orchestrator | Friday 17 April 2026 03:49:46 +0000 (0:00:00.633) 0:02:20.662 ********** 2026-04-17 03:49:55.256990 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:49:55.256999 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:49:55.257008 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:49:55.257018 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:49:55.257027 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:49:55.257037 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:49:55.257046 | orchestrator | 2026-04-17 03:49:55.257055 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-17 03:49:55.257065 | orchestrator | Friday 17 April 2026 03:49:47 +0000 (0:00:00.873) 0:02:21.536 ********** 2026-04-17 03:49:55.257074 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:49:55.257084 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:49:55.257093 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:49:55.257102 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:49:55.257112 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:49:55.257121 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:49:55.257131 | orchestrator | 2026-04-17 03:49:55.257140 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-17 03:49:55.257150 | orchestrator | Friday 17 April 2026 03:49:47 +0000 (0:00:00.638) 0:02:22.175 ********** 2026-04-17 03:49:55.257159 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:49:55.257169 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:49:55.257178 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:49:55.257188 | orchestrator | ok: [testbed-node-5] 2026-04-17 
03:49:55.257197 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:49:55.257207 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:49:55.257216 | orchestrator | 2026-04-17 03:49:55.257226 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-17 03:49:55.257235 | orchestrator | Friday 17 April 2026 03:49:51 +0000 (0:00:03.415) 0:02:25.590 ********** 2026-04-17 03:49:55.257245 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:49:55.257254 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:49:55.257263 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:49:55.257272 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:49:55.257285 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:49:55.257302 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:49:55.257318 | orchestrator | 2026-04-17 03:49:55.257335 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-17 03:49:55.257351 | orchestrator | Friday 17 April 2026 03:49:52 +0000 (0:00:00.645) 0:02:26.235 ********** 2026-04-17 03:49:55.257369 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:49:55.257386 | orchestrator | 2026-04-17 03:49:55.257403 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-17 03:49:55.257420 | orchestrator | Friday 17 April 2026 03:49:53 +0000 (0:00:01.277) 0:02:27.513 ********** 2026-04-17 03:49:55.257449 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:49:55.257466 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:49:55.257483 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:49:55.257501 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:49:55.257518 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:49:55.257535 | orchestrator | skipping: 
[testbed-node-2] 2026-04-17 03:49:55.257553 | orchestrator | 2026-04-17 03:49:55.257579 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-17 03:49:55.257598 | orchestrator | Friday 17 April 2026 03:49:54 +0000 (0:00:00.854) 0:02:28.368 ********** 2026-04-17 03:49:55.257614 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:49:55.257630 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:49:55.257648 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:49:55.257663 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:49:55.257673 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:49:55.257683 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:49:55.257692 | orchestrator | 2026-04-17 03:49:55.257702 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-17 03:49:55.257711 | orchestrator | Friday 17 April 2026 03:49:54 +0000 (0:00:00.625) 0:02:28.993 ********** 2026-04-17 03:49:55.257730 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:50:08.355143 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:50:08.355273 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:50:08.355291 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:50:08.355301 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:50:08.355312 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:50:08.355322 | orchestrator | 2026-04-17 03:50:08.355334 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-17 03:50:08.355347 | orchestrator | Friday 17 April 2026 03:49:55 +0000 (0:00:00.883) 0:02:29.876 ********** 2026-04-17 03:50:08.355357 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:50:08.355367 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:50:08.355377 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:50:08.355387 | orchestrator | skipping: 
[testbed-node-0] 2026-04-17 03:50:08.355396 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:50:08.355406 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:50:08.355416 | orchestrator | 2026-04-17 03:50:08.355426 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-17 03:50:08.355436 | orchestrator | Friday 17 April 2026 03:49:56 +0000 (0:00:00.626) 0:02:30.503 ********** 2026-04-17 03:50:08.355446 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:50:08.355455 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:50:08.355464 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:50:08.355473 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:50:08.355482 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:50:08.355492 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:50:08.355501 | orchestrator | 2026-04-17 03:50:08.355511 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-17 03:50:08.355521 | orchestrator | Friday 17 April 2026 03:49:57 +0000 (0:00:00.845) 0:02:31.348 ********** 2026-04-17 03:50:08.355532 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:50:08.355541 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:50:08.355550 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:50:08.355560 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:50:08.355570 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:50:08.355580 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:50:08.355591 | orchestrator | 2026-04-17 03:50:08.355602 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-17 03:50:08.355611 | orchestrator | Friday 17 April 2026 03:49:57 +0000 (0:00:00.640) 0:02:31.988 ********** 2026-04-17 03:50:08.355622 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:50:08.355631 | orchestrator | skipping: 
[testbed-node-4] 2026-04-17 03:50:08.355668 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:50:08.355681 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:50:08.355691 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:50:08.355701 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:50:08.355711 | orchestrator | 2026-04-17 03:50:08.355721 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-17 03:50:08.355732 | orchestrator | Friday 17 April 2026 03:49:58 +0000 (0:00:00.930) 0:02:32.919 ********** 2026-04-17 03:50:08.355742 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:50:08.355778 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:50:08.355788 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:50:08.355798 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:50:08.355809 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:50:08.355819 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:50:08.355830 | orchestrator | 2026-04-17 03:50:08.355841 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-17 03:50:08.355853 | orchestrator | Friday 17 April 2026 03:49:59 +0000 (0:00:00.871) 0:02:33.790 ********** 2026-04-17 03:50:08.355862 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:50:08.355876 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:50:08.355888 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:50:08.355897 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:50:08.355907 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:50:08.355916 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:50:08.355926 | orchestrator | 2026-04-17 03:50:08.355937 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-17 03:50:08.355947 | orchestrator | Friday 17 April 2026 03:50:00 +0000 (0:00:01.288) 0:02:35.079 ********** 2026-04-17 
03:50:08.355960 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:50:08.355971 | orchestrator | 2026-04-17 03:50:08.355981 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-17 03:50:08.355990 | orchestrator | Friday 17 April 2026 03:50:02 +0000 (0:00:01.319) 0:02:36.399 ********** 2026-04-17 03:50:08.356000 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-17 03:50:08.356010 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-17 03:50:08.356020 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-17 03:50:08.356030 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-17 03:50:08.356040 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-17 03:50:08.356050 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-17 03:50:08.356061 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-17 03:50:08.356070 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-17 03:50:08.356081 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-17 03:50:08.356109 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-17 03:50:08.356119 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-17 03:50:08.356128 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-17 03:50:08.356138 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-17 03:50:08.356148 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-17 03:50:08.356158 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-17 03:50:08.356167 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 
2026-04-17 03:50:08.356177 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-17 03:50:08.356210 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-17 03:50:08.356222 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-17 03:50:08.356247 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-17 03:50:08.356261 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-17 03:50:08.356280 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-17 03:50:08.356286 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-17 03:50:08.356292 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-17 03:50:08.356299 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-17 03:50:08.356305 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-17 03:50:08.356311 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-17 03:50:08.356317 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-17 03:50:08.356323 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-17 03:50:08.356330 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-17 03:50:08.356336 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-17 03:50:08.356342 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-17 03:50:08.356348 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-17 03:50:08.356354 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-17 03:50:08.356361 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-17 03:50:08.356367 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-17 03:50:08.356373 | 
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-17 03:50:08.356379 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-17 03:50:08.356386 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-17 03:50:08.356392 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-17 03:50:08.356398 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-17 03:50:08.356404 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-17 03:50:08.356411 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-17 03:50:08.356417 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-17 03:50:08.356423 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-17 03:50:08.356429 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-17 03:50:08.356435 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-17 03:50:08.356441 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-17 03:50:08.356447 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-17 03:50:08.356455 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-17 03:50:08.356464 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-17 03:50:08.356475 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-17 03:50:08.356485 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-17 03:50:08.356498 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-17 03:50:08.356514 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-17 03:50:08.356523 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-17 03:50:08.356533 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-17 03:50:08.356543 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-17 03:50:08.356553 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-17 03:50:08.356563 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-17 03:50:08.356572 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-17 03:50:08.356581 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-17 03:50:08.356589 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-17 03:50:08.356606 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-17 03:50:08.356617 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-17 03:50:08.356627 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-17 03:50:08.356637 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-17 03:50:08.356646 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-17 03:50:08.356665 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-17 03:50:08.356676 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-17 03:50:08.356686 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-17 03:50:08.356696 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-17 03:50:08.356707 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-17 03:50:08.356716 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 
2026-04-17 03:50:08.356722 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-17 03:50:08.356737 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-17 03:50:22.534499 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-17 03:50:22.534696 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-17 03:50:22.534773 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-17 03:50:22.534787 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-17 03:50:22.534799 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-17 03:50:22.535757 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-17 03:50:22.535818 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-04-17 03:50:22.535840 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-04-17 03:50:22.535856 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-04-17 03:50:22.535863 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-04-17 03:50:22.535869 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-17 03:50:22.535877 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-17 03:50:22.535883 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-04-17 03:50:22.535890 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-04-17 03:50:22.535896 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-04-17 03:50:22.535902 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-04-17 03:50:22.535908 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-04-17 03:50:22.535914 | 
orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-04-17 03:50:22.535921 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-04-17 03:50:22.535927 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-04-17 03:50:22.535934 | orchestrator | 2026-04-17 03:50:22.535942 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-17 03:50:22.535949 | orchestrator | Friday 17 April 2026 03:50:08 +0000 (0:00:06.064) 0:02:42.464 ********** 2026-04-17 03:50:22.535957 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:50:22.535963 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:50:22.535970 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:50:22.535977 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:50:22.535985 | orchestrator | 2026-04-17 03:50:22.535991 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-17 03:50:22.536019 | orchestrator | Friday 17 April 2026 03:50:09 +0000 (0:00:01.068) 0:02:43.532 ********** 2026-04-17 03:50:22.536026 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-17 03:50:22.536045 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 03:50:22.536061 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-17 03:50:22.536065 | orchestrator | 2026-04-17 03:50:22.536075 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-17 03:50:22.536079 | orchestrator | Friday 17 April 2026 03:50:10 +0000 (0:00:00.713) 
0:02:44.246 ********** 2026-04-17 03:50:22.536084 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-17 03:50:22.536088 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 03:50:22.536092 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-17 03:50:22.536096 | orchestrator | 2026-04-17 03:50:22.536099 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-17 03:50:22.536103 | orchestrator | Friday 17 April 2026 03:50:11 +0000 (0:00:01.168) 0:02:45.415 ********** 2026-04-17 03:50:22.536107 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:50:22.536111 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:50:22.536115 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:50:22.536119 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:50:22.536123 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:50:22.536127 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:50:22.536131 | orchestrator | 2026-04-17 03:50:22.536135 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-17 03:50:22.536139 | orchestrator | Friday 17 April 2026 03:50:12 +0000 (0:00:00.820) 0:02:46.236 ********** 2026-04-17 03:50:22.536143 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:50:22.536157 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:50:22.536161 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:50:22.536165 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:50:22.536170 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:50:22.536174 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:50:22.536181 | orchestrator | 2026-04-17 
03:50:22.536188 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-17 03:50:22.536194 | orchestrator | Friday 17 April 2026 03:50:12 +0000 (0:00:00.588) 0:02:46.825 ********** 2026-04-17 03:50:22.536200 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:50:22.536207 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:50:22.536213 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:50:22.536221 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:50:22.536229 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:50:22.536236 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:50:22.536244 | orchestrator | 2026-04-17 03:50:22.536274 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-17 03:50:22.536280 | orchestrator | Friday 17 April 2026 03:50:13 +0000 (0:00:00.829) 0:02:47.654 ********** 2026-04-17 03:50:22.536284 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:50:22.536289 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:50:22.536293 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:50:22.536298 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:50:22.536302 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:50:22.536306 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:50:22.536310 | orchestrator | 2026-04-17 03:50:22.536315 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-17 03:50:22.536327 | orchestrator | Friday 17 April 2026 03:50:14 +0000 (0:00:00.601) 0:02:48.256 ********** 2026-04-17 03:50:22.536331 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:50:22.536335 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:50:22.536340 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:50:22.536344 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:50:22.536348 | orchestrator | skipping: 
[testbed-node-1]
2026-04-17 03:50:22.536352 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:22.536357 | orchestrator |
2026-04-17 03:50:22.536361 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-17 03:50:22.536366 | orchestrator | Friday 17 April 2026 03:50:14 +0000 (0:00:00.838) 0:02:49.095 **********
2026-04-17 03:50:22.536370 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:22.536375 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:22.536379 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:22.536384 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:22.536388 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:22.536392 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:22.536397 | orchestrator |
2026-04-17 03:50:22.536401 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-17 03:50:22.536406 | orchestrator | Friday 17 April 2026 03:50:15 +0000 (0:00:00.637) 0:02:49.732 **********
2026-04-17 03:50:22.536410 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:22.536414 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:22.536418 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:22.536422 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:22.536427 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:22.536431 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:22.536435 | orchestrator |
2026-04-17 03:50:22.536439 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-17 03:50:22.536444 | orchestrator | Friday 17 April 2026 03:50:16 +0000 (0:00:00.868) 0:02:50.600 **********
2026-04-17 03:50:22.536448 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:22.536452 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:22.536457 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:22.536461 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:22.536465 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:22.536470 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:22.536474 | orchestrator |
2026-04-17 03:50:22.536479 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-17 03:50:22.536483 | orchestrator | Friday 17 April 2026 03:50:17 +0000 (0:00:00.593) 0:02:51.194 **********
2026-04-17 03:50:22.536487 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:22.536491 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:22.536496 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:22.536500 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:50:22.536505 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:50:22.536509 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:50:22.536513 | orchestrator |
2026-04-17 03:50:22.536518 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-17 03:50:22.536522 | orchestrator | Friday 17 April 2026 03:50:19 +0000 (0:00:02.832) 0:02:54.026 **********
2026-04-17 03:50:22.536526 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:50:22.536531 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:50:22.536535 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:50:22.536539 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:22.536543 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:22.536548 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:22.536552 | orchestrator |
2026-04-17 03:50:22.536556 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-17 03:50:22.536561 | orchestrator | Friday 17 April 2026 03:50:20 +0000 (0:00:00.598) 0:02:54.625 **********
2026-04-17 03:50:22.536569 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:50:22.536573 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:50:22.536577 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:50:22.536581 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:22.536586 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:22.536590 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:22.536594 | orchestrator |
2026-04-17 03:50:22.536598 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-17 03:50:22.536603 | orchestrator | Friday 17 April 2026 03:50:21 +0000 (0:00:00.909) 0:02:55.534 **********
2026-04-17 03:50:22.536607 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:22.536612 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:22.536616 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:22.536620 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:22.536624 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:22.536632 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:22.536637 | orchestrator |
2026-04-17 03:50:22.536641 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-17 03:50:22.536645 | orchestrator | Friday 17 April 2026 03:50:22 +0000 (0:00:00.852) 0:02:56.387 **********
2026-04-17 03:50:22.536650 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 03:50:22.536654 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-17 03:50:22.536662 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 03:50:36.657422 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:36.657547 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:36.657561 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:36.657567 | orchestrator |
2026-04-17 03:50:36.657575 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-17 03:50:36.657583 | orchestrator | Friday 17 April 2026 03:50:22 +0000 (0:00:00.637) 0:02:57.025 **********
2026-04-17 03:50:36.657588 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-17 03:50:36.657596 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-17 03:50:36.657601 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-17 03:50:36.657605 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-17 03:50:36.657609 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:36.657613 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-17 03:50:36.657617 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-17 03:50:36.657638 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:36.657642 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:36.657645 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:36.657649 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:36.657653 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:36.657657 | orchestrator |
2026-04-17 03:50:36.657661 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-17 03:50:36.657664 | orchestrator | Friday 17 April 2026 03:50:23 +0000 (0:00:00.874) 0:02:57.899 **********
2026-04-17 03:50:36.657668 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:36.657703 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:36.657708 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:36.657712 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:36.657716 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:36.657719 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:36.657723 | orchestrator |
2026-04-17 03:50:36.657727 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-17 03:50:36.657731 | orchestrator | Friday 17 April 2026 03:50:24 +0000 (0:00:00.613) 0:02:58.512 **********
2026-04-17 03:50:36.657734 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:36.657738 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:36.657742 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:36.657745 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:36.657749 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:36.657753 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:36.657756 | orchestrator |
2026-04-17 03:50:36.657761 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 03:50:36.657766 | orchestrator | Friday 17 April 2026 03:50:25 +0000 (0:00:00.810) 0:02:59.323 **********
2026-04-17 03:50:36.657770 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:36.657774 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:36.657788 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:36.657792 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:36.657796 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:36.657799 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:36.657803 | orchestrator |
2026-04-17 03:50:36.657807 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 03:50:36.657811 | orchestrator | Friday 17 April 2026 03:50:25 +0000 (0:00:00.657) 0:02:59.980 **********
2026-04-17 03:50:36.657815 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:36.657818 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:36.657822 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:36.657826 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:36.657830 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:36.657833 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:36.657837 | orchestrator |
2026-04-17 03:50:36.657853 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 03:50:36.657857 | orchestrator | Friday 17 April 2026 03:50:26 +0000 (0:00:00.844) 0:03:00.824 **********
2026-04-17 03:50:36.657861 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:36.657865 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:36.657868 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:36.657872 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:36.657876 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:36.657879 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:36.657883 | orchestrator |
2026-04-17 03:50:36.657887 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 03:50:36.657896 | orchestrator | Friday 17 April 2026 03:50:27 +0000 (0:00:00.667) 0:03:01.492 **********
2026-04-17 03:50:36.657900 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:50:36.657905 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:50:36.657908 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:50:36.657912 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:36.657916 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:36.657919 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:36.657923 | orchestrator |
2026-04-17 03:50:36.657927 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 03:50:36.657930 | orchestrator | Friday 17 April 2026 03:50:28 +0000 (0:00:00.872) 0:03:02.364 **********
2026-04-17 03:50:36.657934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:50:36.657938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:50:36.657942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:50:36.657946 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:36.657950 | orchestrator |
2026-04-17 03:50:36.657954 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 03:50:36.657958 | orchestrator | Friday 17 April 2026 03:50:28 +0000 (0:00:00.419) 0:03:02.784 **********
2026-04-17 03:50:36.657963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:50:36.657967 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:50:36.657972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:50:36.657976 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:36.657981 | orchestrator |
2026-04-17 03:50:36.657985 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 03:50:36.657989 | orchestrator | Friday 17 April 2026 03:50:29 +0000 (0:00:00.412) 0:03:03.196 **********
2026-04-17 03:50:36.657994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:50:36.657998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:50:36.658003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:50:36.658007 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:36.658011 | orchestrator |
2026-04-17 03:50:36.658053 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 03:50:36.658058 | orchestrator | Friday 17 April 2026 03:50:29 +0000 (0:00:00.426) 0:03:03.623 **********
2026-04-17 03:50:36.658062 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:50:36.658066 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:50:36.658070 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:50:36.658075 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:36.658079 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:36.658083 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:36.658088 | orchestrator |
2026-04-17 03:50:36.658092 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 03:50:36.658096 | orchestrator | Friday 17 April 2026 03:50:30 +0000 (0:00:00.621) 0:03:04.244 **********
2026-04-17 03:50:36.658100 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-17 03:50:36.658105 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-17 03:50:36.658109 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-17 03:50:36.658114 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-17 03:50:36.658118 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:36.658122 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-17 03:50:36.658127 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:36.658131 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-17 03:50:36.658135 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:36.658140 | orchestrator |
2026-04-17 03:50:36.658144 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-17 03:50:36.658148 | orchestrator | Friday 17 April 2026 03:50:31 +0000 (0:00:01.840) 0:03:06.085 **********
2026-04-17 03:50:36.658159 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:50:36.658172 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:50:36.658177 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:50:36.658181 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:50:36.658185 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:50:36.658190 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:50:36.658194 | orchestrator |
2026-04-17 03:50:36.658198 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-17 03:50:36.658203 | orchestrator | Friday 17 April 2026 03:50:34 +0000 (0:00:02.562) 0:03:08.648 **********
2026-04-17 03:50:36.658207 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:50:36.658211 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:50:36.658215 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:50:36.658220 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:50:36.658227 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:50:36.658232 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:50:36.658236 | orchestrator |
2026-04-17 03:50:36.658241 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-17 03:50:36.658247 | orchestrator | Friday 17 April 2026 03:50:35 +0000 (0:00:00.993) 0:03:09.641 **********
2026-04-17 03:50:36.658254 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:36.658263 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:36.658268 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:36.658274 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:50:36.658281 | orchestrator |
2026-04-17 03:50:36.658286 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-17 03:50:36.658298 | orchestrator | Friday 17 April 2026 03:50:36 +0000 (0:00:01.180) 0:03:10.821 **********
2026-04-17 03:50:54.114969 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:50:54.115072 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:50:54.115083 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:50:54.115091 | orchestrator |
2026-04-17 03:50:54.115100 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-17 03:50:54.115108 | orchestrator | Friday 17 April 2026 03:50:36 +0000 (0:00:00.352) 0:03:11.173 **********
2026-04-17 03:50:54.115116 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:50:54.115125 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:50:54.115132 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:50:54.115140 | orchestrator |
2026-04-17 03:50:54.115147 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-17 03:50:54.115155 | orchestrator | Friday 17 April 2026 03:50:38 +0000 (0:00:01.463) 0:03:12.637 **********
2026-04-17 03:50:54.115162 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 03:50:54.115170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 03:50:54.115177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 03:50:54.115185 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:54.115192 | orchestrator |
2026-04-17 03:50:54.115199 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-17 03:50:54.115207 | orchestrator | Friday 17 April 2026 03:50:39 +0000 (0:00:00.728) 0:03:13.365 **********
2026-04-17 03:50:54.115214 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:50:54.115222 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:50:54.115230 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:50:54.115237 | orchestrator |
2026-04-17 03:50:54.115245 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-17 03:50:54.115252 | orchestrator | Friday 17 April 2026 03:50:39 +0000 (0:00:00.373) 0:03:13.739 **********
2026-04-17 03:50:54.115259 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:54.115278 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:54.115285 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:54.115293 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:50:54.115323 | orchestrator |
2026-04-17 03:50:54.115331 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-17 03:50:54.115338 | orchestrator | Friday 17 April 2026 03:50:40 +0000 (0:00:01.210) 0:03:14.949 **********
2026-04-17 03:50:54.115346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:50:54.115353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:50:54.115360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:50:54.115367 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115375 | orchestrator |
2026-04-17 03:50:54.115382 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-17 03:50:54.115389 | orchestrator | Friday 17 April 2026 03:50:41 +0000 (0:00:00.456) 0:03:15.406 **********
2026-04-17 03:50:54.115397 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115404 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:54.115411 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:54.115418 | orchestrator |
2026-04-17 03:50:54.115425 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-17 03:50:54.115433 | orchestrator | Friday 17 April 2026 03:50:41 +0000 (0:00:00.353) 0:03:15.760 **********
2026-04-17 03:50:54.115440 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115447 | orchestrator |
2026-04-17 03:50:54.115454 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-17 03:50:54.115462 | orchestrator | Friday 17 April 2026 03:50:41 +0000 (0:00:00.229) 0:03:15.989 **********
2026-04-17 03:50:54.115469 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115476 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:54.115483 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:54.115491 | orchestrator |
2026-04-17 03:50:54.115498 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-17 03:50:54.115505 | orchestrator | Friday 17 April 2026 03:50:42 +0000 (0:00:00.611) 0:03:16.600 **********
2026-04-17 03:50:54.115514 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115522 | orchestrator |
2026-04-17 03:50:54.115530 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-17 03:50:54.115538 | orchestrator | Friday 17 April 2026 03:50:42 +0000 (0:00:00.248) 0:03:16.849 **********
2026-04-17 03:50:54.115547 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115555 | orchestrator |
2026-04-17 03:50:54.115563 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-17 03:50:54.115571 | orchestrator | Friday 17 April 2026 03:50:42 +0000 (0:00:00.286) 0:03:17.136 **********
2026-04-17 03:50:54.115580 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115588 | orchestrator |
2026-04-17 03:50:54.115596 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-17 03:50:54.115605 | orchestrator | Friday 17 April 2026 03:50:43 +0000 (0:00:00.162) 0:03:17.298 **********
2026-04-17 03:50:54.115613 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115622 | orchestrator |
2026-04-17 03:50:54.115710 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-17 03:50:54.115741 | orchestrator | Friday 17 April 2026 03:50:43 +0000 (0:00:00.247) 0:03:17.546 **********
2026-04-17 03:50:54.115753 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115764 | orchestrator |
2026-04-17 03:50:54.115775 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-17 03:50:54.115786 | orchestrator | Friday 17 April 2026 03:50:43 +0000 (0:00:00.248) 0:03:17.795 **********
2026-04-17 03:50:54.115794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:50:54.115801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:50:54.115808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:50:54.115815 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115822 | orchestrator |
2026-04-17 03:50:54.115830 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-17 03:50:54.115861 | orchestrator | Friday 17 April 2026 03:50:44 +0000 (0:00:00.438) 0:03:18.234 **********
2026-04-17 03:50:54.115869 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115876 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:54.115884 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:54.115891 | orchestrator |
2026-04-17 03:50:54.115898 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-17 03:50:54.115905 | orchestrator | Friday 17 April 2026 03:50:44 +0000 (0:00:00.342) 0:03:18.577 **********
2026-04-17 03:50:54.115912 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115920 | orchestrator |
2026-04-17 03:50:54.115927 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-17 03:50:54.115934 | orchestrator | Friday 17 April 2026 03:50:44 +0000 (0:00:00.255) 0:03:18.832 **********
2026-04-17 03:50:54.115941 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.115948 | orchestrator |
2026-04-17 03:50:54.115955 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-17 03:50:54.115962 | orchestrator | Friday 17 April 2026 03:50:45 +0000 (0:00:00.878) 0:03:19.711 **********
2026-04-17 03:50:54.115970 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:54.115977 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:54.115984 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:54.115991 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:50:54.115999 | orchestrator |
2026-04-17 03:50:54.116006 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-17 03:50:54.116013 | orchestrator | Friday 17 April 2026 03:50:46 +0000 (0:00:00.876) 0:03:20.587 **********
2026-04-17 03:50:54.116020 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:50:54.116027 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:50:54.116034 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:50:54.116042 | orchestrator |
2026-04-17 03:50:54.116049 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-17 03:50:54.116056 | orchestrator | Friday 17 April 2026 03:50:46 +0000 (0:00:00.579) 0:03:21.166 **********
2026-04-17 03:50:54.116063 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:50:54.116071 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:50:54.116078 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:50:54.116085 | orchestrator |
2026-04-17 03:50:54.116092 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-17 03:50:54.116099 | orchestrator | Friday 17 April 2026 03:50:48 +0000 (0:00:01.258) 0:03:22.425 **********
2026-04-17 03:50:54.116107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:50:54.116114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:50:54.116121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:50:54.116128 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.116135 | orchestrator |
2026-04-17 03:50:54.116142 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-17 03:50:54.116149 | orchestrator | Friday 17 April 2026 03:50:48 +0000 (0:00:00.684) 0:03:23.109 **********
2026-04-17 03:50:54.116157 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:50:54.116164 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:50:54.116171 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:50:54.116178 | orchestrator |
2026-04-17 03:50:54.116185 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-17 03:50:54.116193 | orchestrator | Friday 17 April 2026 03:50:49 +0000 (0:00:00.364) 0:03:23.474 **********
2026-04-17 03:50:54.116200 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:54.116207 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:54.116214 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:50:54.116222 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:50:54.116234 | orchestrator |
2026-04-17 03:50:54.116241 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-17 03:50:54.116247 | orchestrator | Friday 17 April 2026 03:50:50 +0000 (0:00:01.109) 0:03:24.584 **********
2026-04-17 03:50:54.116254 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:50:54.116261 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:50:54.116267 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:50:54.116274 | orchestrator |
2026-04-17 03:50:54.116281 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-17 03:50:54.116287 | orchestrator | Friday 17 April 2026 03:50:50 +0000 (0:00:00.352) 0:03:24.936 **********
2026-04-17 03:50:54.116294 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:50:54.116300 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:50:54.116307 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:50:54.116314 | orchestrator |
2026-04-17 03:50:54.116320 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-17 03:50:54.116327 | orchestrator | Friday 17 April 2026 03:50:51 +0000 (0:00:01.215) 0:03:26.152 **********
2026-04-17 03:50:54.116333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:50:54.116340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:50:54.116346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:50:54.116353 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.116360 | orchestrator |
2026-04-17 03:50:54.116371 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-17 03:50:54.116378 | orchestrator | Friday 17 April 2026 03:50:53 +0000 (0:00:01.128) 0:03:27.281 **********
2026-04-17 03:50:54.116384 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:50:54.116391 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:50:54.116397 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:50:54.116404 | orchestrator |
2026-04-17 03:50:54.116411 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-17 03:50:54.116417 | orchestrator | Friday 17 April 2026 03:50:53 +0000 (0:00:00.366) 0:03:27.647 **********
2026-04-17 03:50:54.116424 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:50:54.116431 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:50:54.116437 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:50:54.116444 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:50:54.116450 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:50:54.116462 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:51:10.792090 | orchestrator |
2026-04-17 03:51:10.792208 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-17 03:51:10.792228 | orchestrator | Friday 17 April 2026 03:50:54 +0000 (0:00:00.633) 0:03:28.281 **********
2026-04-17 03:51:10.792240 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:51:10.792253 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:51:10.792293 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:51:10.792306 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:51:10.792318 | orchestrator |
2026-04-17 03:51:10.792329 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-17 03:51:10.792340 | orchestrator | Friday 17 April 2026 03:50:55 +0000 (0:00:01.168) 0:03:29.449 **********
2026-04-17 03:51:10.792350 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:51:10.792362 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:51:10.792372 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:51:10.792382 | orchestrator |
2026-04-17 03:51:10.792393 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-17 03:51:10.792404 | orchestrator | Friday 17 April 2026 03:50:55 +0000 (0:00:00.353) 0:03:29.803 **********
2026-04-17 03:51:10.792416 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:51:10.792426 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:51:10.792438 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:51:10.792473 | orchestrator |
2026-04-17 03:51:10.792485 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-17 03:51:10.792492 | orchestrator | Friday 17 April 2026 03:50:57 +0000 (0:00:01.500) 0:03:31.303 **********
2026-04-17 03:51:10.792499 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 03:51:10.792507 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 03:51:10.792513 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 03:51:10.792520 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:51:10.792527 | orchestrator |
2026-04-17 03:51:10.792533 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-17 03:51:10.792540 | orchestrator | Friday 17 April 2026 03:50:57 +0000 (0:00:00.685) 0:03:31.988 **********
2026-04-17 03:51:10.792547 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:51:10.792553 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:51:10.792560 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:51:10.792566 | orchestrator |
2026-04-17 03:51:10.792573 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-17 03:51:10.792579 | orchestrator |
2026-04-17 03:51:10.792586 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 03:51:10.792615 | orchestrator | Friday 17 April 2026 03:50:58 +0000 (0:00:00.677) 0:03:32.666 **********
2026-04-17 03:51:10.792625 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:51:10.792635 | orchestrator |
2026-04-17 03:51:10.792642 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 03:51:10.792650 | orchestrator | Friday 17 April 2026 03:50:59 +0000 (0:00:00.880) 0:03:33.547 **********
2026-04-17 03:51:10.792658 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:51:10.792666 | orchestrator |
2026-04-17 03:51:10.792673 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 03:51:10.792680 | orchestrator | Friday 17 April 2026 03:50:59 +0000 (0:00:00.579) 0:03:34.127 **********
2026-04-17 03:51:10.792688 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:51:10.792696 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:51:10.792703 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:51:10.792711 | orchestrator |
2026-04-17 03:51:10.792718 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 03:51:10.792726 | orchestrator | Friday 17 April 2026 03:51:00 +0000 (0:00:00.740) 0:03:34.868 **********
2026-04-17 03:51:10.792733 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:51:10.792741 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:51:10.792748 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:51:10.792755 | orchestrator |
2026-04-17 03:51:10.792763 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 03:51:10.792770 | orchestrator | Friday 17 April 2026 03:51:01 +0000 (0:00:00.559) 0:03:35.427 **********
2026-04-17 03:51:10.792778 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:51:10.792785 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:51:10.792793 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:51:10.792800 | orchestrator |
2026-04-17 03:51:10.792807 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 03:51:10.792815 | orchestrator | Friday 17 April 2026 03:51:01 +0000 (0:00:00.320) 0:03:35.748 **********
2026-04-17 03:51:10.792822 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:51:10.792830 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:51:10.792837 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:51:10.792845 | orchestrator |
2026-04-17 03:51:10.792852 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 03:51:10.792873 | orchestrator | Friday 17 April 2026 03:51:01 +0000 (0:00:00.317) 0:03:36.066 **********
2026-04-17 03:51:10.792881 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:51:10.792898 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:51:10.792906 | orchestrator | ok:
[testbed-node-2] 2026-04-17 03:51:10.792913 | orchestrator | 2026-04-17 03:51:10.792921 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 03:51:10.792928 | orchestrator | Friday 17 April 2026 03:51:02 +0000 (0:00:00.707) 0:03:36.774 ********** 2026-04-17 03:51:10.792935 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:51:10.792943 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:51:10.792950 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:51:10.792958 | orchestrator | 2026-04-17 03:51:10.792966 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 03:51:10.792973 | orchestrator | Friday 17 April 2026 03:51:03 +0000 (0:00:00.589) 0:03:37.363 ********** 2026-04-17 03:51:10.792981 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:51:10.793004 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:51:10.793011 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:51:10.793018 | orchestrator | 2026-04-17 03:51:10.793024 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 03:51:10.793030 | orchestrator | Friday 17 April 2026 03:51:03 +0000 (0:00:00.329) 0:03:37.693 ********** 2026-04-17 03:51:10.793037 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:51:10.793043 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:51:10.793050 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:51:10.793056 | orchestrator | 2026-04-17 03:51:10.793063 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 03:51:10.793069 | orchestrator | Friday 17 April 2026 03:51:04 +0000 (0:00:00.722) 0:03:38.415 ********** 2026-04-17 03:51:10.793077 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:51:10.793087 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:51:10.793098 | orchestrator | ok: [testbed-node-2] 2026-04-17 
03:51:10.793109 | orchestrator | 2026-04-17 03:51:10.793119 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 03:51:10.793129 | orchestrator | Friday 17 April 2026 03:51:04 +0000 (0:00:00.719) 0:03:39.135 ********** 2026-04-17 03:51:10.793138 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:51:10.793149 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:51:10.793160 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:51:10.793171 | orchestrator | 2026-04-17 03:51:10.793181 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 03:51:10.793192 | orchestrator | Friday 17 April 2026 03:51:05 +0000 (0:00:00.600) 0:03:39.735 ********** 2026-04-17 03:51:10.793202 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:51:10.793213 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:51:10.793224 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:51:10.793236 | orchestrator | 2026-04-17 03:51:10.793248 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 03:51:10.793259 | orchestrator | Friday 17 April 2026 03:51:05 +0000 (0:00:00.354) 0:03:40.090 ********** 2026-04-17 03:51:10.793270 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:51:10.793280 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:51:10.793290 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:51:10.793300 | orchestrator | 2026-04-17 03:51:10.793311 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 03:51:10.793321 | orchestrator | Friday 17 April 2026 03:51:06 +0000 (0:00:00.318) 0:03:40.408 ********** 2026-04-17 03:51:10.793336 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:51:10.793352 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:51:10.793362 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:51:10.793372 | 
orchestrator | 2026-04-17 03:51:10.793382 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 03:51:10.793393 | orchestrator | Friday 17 April 2026 03:51:06 +0000 (0:00:00.579) 0:03:40.988 ********** 2026-04-17 03:51:10.793402 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:51:10.793412 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:51:10.793422 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:51:10.793441 | orchestrator | 2026-04-17 03:51:10.793451 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 03:51:10.793462 | orchestrator | Friday 17 April 2026 03:51:07 +0000 (0:00:00.343) 0:03:41.332 ********** 2026-04-17 03:51:10.793471 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:51:10.793482 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:51:10.793493 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:51:10.793503 | orchestrator | 2026-04-17 03:51:10.793515 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 03:51:10.793526 | orchestrator | Friday 17 April 2026 03:51:07 +0000 (0:00:00.330) 0:03:41.663 ********** 2026-04-17 03:51:10.793537 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:51:10.793548 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:51:10.793559 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:51:10.793566 | orchestrator | 2026-04-17 03:51:10.793572 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 03:51:10.793579 | orchestrator | Friday 17 April 2026 03:51:07 +0000 (0:00:00.391) 0:03:42.054 ********** 2026-04-17 03:51:10.793585 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:51:10.793609 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:51:10.793618 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:51:10.793625 | orchestrator | 
2026-04-17 03:51:10.793631 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 03:51:10.793638 | orchestrator | Friday 17 April 2026 03:51:08 +0000 (0:00:00.611) 0:03:42.666 ********** 2026-04-17 03:51:10.793645 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:51:10.793651 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:51:10.793658 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:51:10.793664 | orchestrator | 2026-04-17 03:51:10.793671 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 03:51:10.793677 | orchestrator | Friday 17 April 2026 03:51:08 +0000 (0:00:00.389) 0:03:43.056 ********** 2026-04-17 03:51:10.793684 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:51:10.793690 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:51:10.793697 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:51:10.793703 | orchestrator | 2026-04-17 03:51:10.793710 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-17 03:51:10.793717 | orchestrator | Friday 17 April 2026 03:51:09 +0000 (0:00:00.553) 0:03:43.610 ********** 2026-04-17 03:51:10.793731 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:51:10.793738 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:51:10.793744 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:51:10.793751 | orchestrator | 2026-04-17 03:51:10.793757 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-17 03:51:10.793764 | orchestrator | Friday 17 April 2026 03:51:10 +0000 (0:00:00.592) 0:03:44.202 ********** 2026-04-17 03:51:10.793771 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:51:10.793778 | orchestrator | 2026-04-17 03:51:10.793784 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2026-04-17 03:51:10.793791 | orchestrator | Friday 17 April 2026 03:51:10 +0000 (0:00:00.610) 0:03:44.813 ********** 2026-04-17 03:51:10.793797 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:51:10.793804 | orchestrator | 2026-04-17 03:51:10.793820 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-17 03:52:22.612917 | orchestrator | Friday 17 April 2026 03:51:10 +0000 (0:00:00.146) 0:03:44.959 ********** 2026-04-17 03:52:22.613066 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-17 03:52:22.613094 | orchestrator | 2026-04-17 03:52:22.613114 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-17 03:52:22.613135 | orchestrator | Friday 17 April 2026 03:51:11 +0000 (0:00:01.031) 0:03:45.991 ********** 2026-04-17 03:52:22.613154 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:52:22.613172 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:52:22.613191 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:52:22.613247 | orchestrator | 2026-04-17 03:52:22.613285 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-17 03:52:22.613304 | orchestrator | Friday 17 April 2026 03:51:12 +0000 (0:00:00.616) 0:03:46.608 ********** 2026-04-17 03:52:22.613323 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:52:22.613342 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:52:22.613360 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:52:22.613379 | orchestrator | 2026-04-17 03:52:22.613398 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-17 03:52:22.613417 | orchestrator | Friday 17 April 2026 03:51:12 +0000 (0:00:00.365) 0:03:46.974 ********** 2026-04-17 03:52:22.613436 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:52:22.613483 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:52:22.613503 | 
orchestrator | changed: [testbed-node-2] 2026-04-17 03:52:22.613523 | orchestrator | 2026-04-17 03:52:22.613544 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-17 03:52:22.613564 | orchestrator | Friday 17 April 2026 03:51:13 +0000 (0:00:01.192) 0:03:48.166 ********** 2026-04-17 03:52:22.613582 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:52:22.613602 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:52:22.613623 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:52:22.613642 | orchestrator | 2026-04-17 03:52:22.613662 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-17 03:52:22.613680 | orchestrator | Friday 17 April 2026 03:51:14 +0000 (0:00:00.771) 0:03:48.938 ********** 2026-04-17 03:52:22.613700 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:52:22.613720 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:52:22.613739 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:52:22.613758 | orchestrator | 2026-04-17 03:52:22.613771 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-17 03:52:22.613784 | orchestrator | Friday 17 April 2026 03:51:15 +0000 (0:00:01.053) 0:03:49.991 ********** 2026-04-17 03:52:22.613795 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:52:22.613806 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:52:22.613817 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:52:22.613828 | orchestrator | 2026-04-17 03:52:22.613838 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-17 03:52:22.613849 | orchestrator | Friday 17 April 2026 03:51:16 +0000 (0:00:00.678) 0:03:50.670 ********** 2026-04-17 03:52:22.613860 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:52:22.613871 | orchestrator | 2026-04-17 03:52:22.613882 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2026-04-17 03:52:22.613893 | orchestrator | Friday 17 April 2026 03:51:17 +0000 (0:00:01.262) 0:03:51.933 ********** 2026-04-17 03:52:22.613903 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:52:22.613914 | orchestrator | 2026-04-17 03:52:22.613925 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-17 03:52:22.613936 | orchestrator | Friday 17 April 2026 03:51:18 +0000 (0:00:00.731) 0:03:52.664 ********** 2026-04-17 03:52:22.613947 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-17 03:52:22.613958 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 03:52:22.613969 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 03:52:22.613979 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-17 03:52:22.613991 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-17 03:52:22.614005 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-17 03:52:22.614094 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 03:52:22.614116 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-04-17 03:52:22.614135 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 03:52:22.614152 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-17 03:52:22.614199 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-17 03:52:22.614219 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-17 03:52:22.614231 | orchestrator | 2026-04-17 03:52:22.614242 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-17 03:52:22.614253 | orchestrator | Friday 17 April 2026 03:51:21 +0000 (0:00:03.133) 0:03:55.798 ********** 2026-04-17 03:52:22.614264 
| orchestrator | changed: [testbed-node-0] 2026-04-17 03:52:22.614274 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:52:22.614285 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:52:22.614296 | orchestrator | 2026-04-17 03:52:22.614306 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-17 03:52:22.614335 | orchestrator | Friday 17 April 2026 03:51:22 +0000 (0:00:01.143) 0:03:56.942 ********** 2026-04-17 03:52:22.614347 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:52:22.614358 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:52:22.614368 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:52:22.614379 | orchestrator | 2026-04-17 03:52:22.614397 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-17 03:52:22.614423 | orchestrator | Friday 17 April 2026 03:51:23 +0000 (0:00:00.627) 0:03:57.569 ********** 2026-04-17 03:52:22.614443 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:52:22.614532 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:52:22.614548 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:52:22.614563 | orchestrator | 2026-04-17 03:52:22.614580 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-17 03:52:22.614597 | orchestrator | Friday 17 April 2026 03:51:23 +0000 (0:00:00.338) 0:03:57.908 ********** 2026-04-17 03:52:22.614614 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:52:22.614631 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:52:22.614673 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:52:22.614691 | orchestrator | 2026-04-17 03:52:22.614707 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-17 03:52:22.614722 | orchestrator | Friday 17 April 2026 03:51:25 +0000 (0:00:01.439) 0:03:59.348 ********** 2026-04-17 03:52:22.614740 | orchestrator | changed: [testbed-node-0] 
2026-04-17 03:52:22.614757 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:52:22.614774 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:52:22.614792 | orchestrator | 2026-04-17 03:52:22.614809 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-17 03:52:22.614827 | orchestrator | Friday 17 April 2026 03:51:26 +0000 (0:00:01.556) 0:04:00.904 ********** 2026-04-17 03:52:22.614846 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:52:22.614865 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:52:22.614881 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:52:22.614899 | orchestrator | 2026-04-17 03:52:22.614915 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-17 03:52:22.614931 | orchestrator | Friday 17 April 2026 03:51:27 +0000 (0:00:00.329) 0:04:01.234 ********** 2026-04-17 03:52:22.614948 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:52:22.614964 | orchestrator | 2026-04-17 03:52:22.614980 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-17 03:52:22.614997 | orchestrator | Friday 17 April 2026 03:51:27 +0000 (0:00:00.563) 0:04:01.797 ********** 2026-04-17 03:52:22.615015 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:52:22.615032 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:52:22.615050 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:52:22.615068 | orchestrator | 2026-04-17 03:52:22.615084 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-17 03:52:22.615101 | orchestrator | Friday 17 April 2026 03:51:28 +0000 (0:00:00.576) 0:04:02.374 ********** 2026-04-17 03:52:22.615119 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:52:22.615137 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 03:52:22.615155 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:52:22.615194 | orchestrator | 2026-04-17 03:52:22.615211 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-17 03:52:22.615229 | orchestrator | Friday 17 April 2026 03:51:28 +0000 (0:00:00.338) 0:04:02.713 ********** 2026-04-17 03:52:22.615247 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:52:22.615267 | orchestrator | 2026-04-17 03:52:22.615285 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-17 03:52:22.615303 | orchestrator | Friday 17 April 2026 03:51:29 +0000 (0:00:00.539) 0:04:03.252 ********** 2026-04-17 03:52:22.615322 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:52:22.615340 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:52:22.615357 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:52:22.615377 | orchestrator | 2026-04-17 03:52:22.615395 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-17 03:52:22.615413 | orchestrator | Friday 17 April 2026 03:51:31 +0000 (0:00:02.131) 0:04:05.384 ********** 2026-04-17 03:52:22.615431 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:52:22.615443 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:52:22.615493 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:52:22.615508 | orchestrator | 2026-04-17 03:52:22.615519 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-17 03:52:22.615530 | orchestrator | Friday 17 April 2026 03:51:32 +0000 (0:00:01.234) 0:04:06.619 ********** 2026-04-17 03:52:22.615541 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:52:22.615563 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:52:22.615574 | orchestrator | changed: 
[testbed-node-2] 2026-04-17 03:52:22.615585 | orchestrator | 2026-04-17 03:52:22.615596 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-17 03:52:22.615607 | orchestrator | Friday 17 April 2026 03:51:34 +0000 (0:00:01.724) 0:04:08.343 ********** 2026-04-17 03:52:22.615617 | orchestrator | changed: [testbed-node-0] 2026-04-17 03:52:22.615628 | orchestrator | changed: [testbed-node-2] 2026-04-17 03:52:22.615639 | orchestrator | changed: [testbed-node-1] 2026-04-17 03:52:22.615650 | orchestrator | 2026-04-17 03:52:22.615661 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-17 03:52:22.615671 | orchestrator | Friday 17 April 2026 03:51:36 +0000 (0:00:02.790) 0:04:11.134 ********** 2026-04-17 03:52:22.615682 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:52:22.615693 | orchestrator | 2026-04-17 03:52:22.615704 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-17 03:52:22.615715 | orchestrator | Friday 17 April 2026 03:51:37 +0000 (0:00:00.807) 0:04:11.942 ********** 2026-04-17 03:52:22.615725 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-04-17 03:52:22.615736 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:52:22.615748 | orchestrator | 2026-04-17 03:52:22.615769 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-17 03:52:22.615780 | orchestrator | Friday 17 April 2026 03:51:59 +0000 (0:00:21.808) 0:04:33.750 ********** 2026-04-17 03:52:22.615791 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:52:22.615802 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:52:22.615813 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:52:22.615824 | orchestrator | 2026-04-17 03:52:22.615834 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-17 03:52:22.615845 | orchestrator | Friday 17 April 2026 03:52:08 +0000 (0:00:08.690) 0:04:42.441 ********** 2026-04-17 03:52:22.615856 | orchestrator | skipping: [testbed-node-0] 2026-04-17 03:52:22.615867 | orchestrator | skipping: [testbed-node-1] 2026-04-17 03:52:22.615877 | orchestrator | skipping: [testbed-node-2] 2026-04-17 03:52:22.615888 | orchestrator | 2026-04-17 03:52:22.615899 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-17 03:52:22.615920 | orchestrator | Friday 17 April 2026 03:52:08 +0000 (0:00:00.330) 0:04:42.772 ********** 2026-04-17 03:52:22.615950 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__de35cf7747eec9a7393be5e984495fcaada33865'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-17 03:52:34.870328 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__de35cf7747eec9a7393be5e984495fcaada33865'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-17 03:52:34.870533 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__de35cf7747eec9a7393be5e984495fcaada33865'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-17 03:52:34.870559 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__de35cf7747eec9a7393be5e984495fcaada33865'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-17 03:52:34.870616 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__de35cf7747eec9a7393be5e984495fcaada33865'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-17 03:52:34.870635 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__de35cf7747eec9a7393be5e984495fcaada33865'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__de35cf7747eec9a7393be5e984495fcaada33865'}])  2026-04-17 03:52:34.870651 | orchestrator | 2026-04-17 03:52:34.870668 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] **********************
2026-04-17 03:52:34.870681 | orchestrator | Friday 17 April 2026 03:52:22 +0000 (0:00:14.007) 0:04:56.779 **********
2026-04-17 03:52:34.870690 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.870700 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:52:34.870708 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:52:34.870716 | orchestrator |
2026-04-17 03:52:34.870724 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-17 03:52:34.870732 | orchestrator | Friday 17 April 2026 03:52:22 +0000 (0:00:00.347) 0:04:57.127 **********
2026-04-17 03:52:34.870741 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:52:34.870749 | orchestrator |
2026-04-17 03:52:34.870757 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-17 03:52:34.870765 | orchestrator | Friday 17 April 2026 03:52:23 +0000 (0:00:00.800) 0:04:57.928 **********
2026-04-17 03:52:34.870773 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:52:34.870781 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:52:34.870789 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:52:34.870798 | orchestrator |
2026-04-17 03:52:34.870806 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-17 03:52:34.870814 | orchestrator | Friday 17 April 2026 03:52:24 +0000 (0:00:00.342) 0:04:58.270 **********
2026-04-17 03:52:34.870844 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.870853 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:52:34.870862 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:52:34.870871 | orchestrator |
2026-04-17 03:52:34.870893 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-17 03:52:34.870904 | orchestrator | Friday 17 April 2026 03:52:24 +0000 (0:00:00.330) 0:04:58.601 **********
2026-04-17 03:52:34.870918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 03:52:34.870932 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 03:52:34.870945 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 03:52:34.870959 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.870972 | orchestrator |
2026-04-17 03:52:34.870986 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-17 03:52:34.871000 | orchestrator | Friday 17 April 2026 03:52:25 +0000 (0:00:00.933) 0:04:59.534 **********
2026-04-17 03:52:34.871015 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:52:34.871029 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:52:34.871042 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:52:34.871051 | orchestrator |
2026-04-17 03:52:34.871060 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-04-17 03:52:34.871069 | orchestrator |
2026-04-17 03:52:34.871078 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 03:52:34.871087 | orchestrator | Friday 17 April 2026 03:52:26 +0000 (0:00:00.853) 0:05:00.388 **********
2026-04-17 03:52:34.871097 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:52:34.871108 | orchestrator |
2026-04-17 03:52:34.871136 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 03:52:34.871146 | orchestrator | Friday 17 April 2026 03:52:26 +0000 (0:00:00.523) 0:05:00.911 **********
2026-04-17 03:52:34.871155 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:52:34.871164 | orchestrator |
2026-04-17 03:52:34.871173 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 03:52:34.871182 | orchestrator | Friday 17 April 2026 03:52:27 +0000 (0:00:00.820) 0:05:01.732 **********
2026-04-17 03:52:34.871191 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:52:34.871199 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:52:34.871206 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:52:34.871214 | orchestrator |
2026-04-17 03:52:34.871222 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 03:52:34.871230 | orchestrator | Friday 17 April 2026 03:52:28 +0000 (0:00:00.706) 0:05:02.438 **********
2026-04-17 03:52:34.871237 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.871245 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:52:34.871254 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:52:34.871262 | orchestrator |
2026-04-17 03:52:34.871269 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 03:52:34.871277 | orchestrator | Friday 17 April 2026 03:52:28 +0000 (0:00:00.309) 0:05:02.747 **********
2026-04-17 03:52:34.871285 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.871293 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:52:34.871322 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:52:34.871330 | orchestrator |
2026-04-17 03:52:34.871338 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 03:52:34.871346 | orchestrator | Friday 17 April 2026 03:52:29 +0000 (0:00:00.585) 0:05:03.333 **********
2026-04-17 03:52:34.871354 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.871362 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:52:34.871370 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:52:34.871378 | orchestrator |
2026-04-17 03:52:34.871386 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 03:52:34.871405 | orchestrator | Friday 17 April 2026 03:52:29 +0000 (0:00:00.296) 0:05:03.629 **********
2026-04-17 03:52:34.871413 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:52:34.871421 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:52:34.871473 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:52:34.871481 | orchestrator |
2026-04-17 03:52:34.871489 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 03:52:34.871497 | orchestrator | Friday 17 April 2026 03:52:30 +0000 (0:00:00.722) 0:05:04.352 **********
2026-04-17 03:52:34.871505 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.871513 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:52:34.871521 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:52:34.871529 | orchestrator |
2026-04-17 03:52:34.871537 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 03:52:34.871545 | orchestrator | Friday 17 April 2026 03:52:30 +0000 (0:00:00.328) 0:05:04.681 **********
2026-04-17 03:52:34.871552 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.871560 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:52:34.871568 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:52:34.871576 | orchestrator |
2026-04-17 03:52:34.871584 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 03:52:34.871592 | orchestrator | Friday 17 April 2026 03:52:31 +0000 (0:00:00.572) 0:05:05.253 **********
2026-04-17 03:52:34.871600 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:52:34.871607 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:52:34.871615 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:52:34.871623 | orchestrator |
2026-04-17 03:52:34.871631 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 03:52:34.871639 | orchestrator | Friday 17 April 2026 03:52:31 +0000 (0:00:00.723) 0:05:05.977 **********
2026-04-17 03:52:34.871647 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:52:34.871655 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:52:34.871663 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:52:34.871670 | orchestrator |
2026-04-17 03:52:34.871678 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 03:52:34.871686 | orchestrator | Friday 17 April 2026 03:52:32 +0000 (0:00:00.696) 0:05:06.673 **********
2026-04-17 03:52:34.871694 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.871702 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:52:34.871710 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:52:34.871718 | orchestrator |
2026-04-17 03:52:34.871732 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 03:52:34.871740 | orchestrator | Friday 17 April 2026 03:52:32 +0000 (0:00:00.315) 0:05:06.989 **********
2026-04-17 03:52:34.871748 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:52:34.871756 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:52:34.871764 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:52:34.871772 | orchestrator |
2026-04-17 03:52:34.871779 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 03:52:34.871787 | orchestrator | Friday 17 April 2026 03:52:33 +0000 (0:00:00.807) 0:05:07.797 **********
2026-04-17 03:52:34.871795 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.871803 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:52:34.871811 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:52:34.871818 | orchestrator |
2026-04-17 03:52:34.871826 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 03:52:34.871834 | orchestrator | Friday 17 April 2026 03:52:33 +0000 (0:00:00.322) 0:05:08.119 **********
2026-04-17 03:52:34.871842 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.871850 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:52:34.871857 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:52:34.871865 | orchestrator |
2026-04-17 03:52:34.871873 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 03:52:34.871881 | orchestrator | Friday 17 April 2026 03:52:34 +0000 (0:00:00.303) 0:05:08.423 **********
2026-04-17 03:52:34.871896 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:52:34.871904 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:52:34.871912 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:52:34.871920 | orchestrator |
2026-04-17 03:52:34.871934 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 03:53:28.117009 | orchestrator | Friday 17 April 2026 03:52:34 +0000 (0:00:00.612) 0:05:09.035 **********
2026-04-17 03:53:28.117111 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:53:28.117125 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:53:28.117136 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:53:28.117146 | orchestrator |
2026-04-17 03:53:28.117158 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 03:53:28.117166 | orchestrator | Friday 17 April 2026 03:52:35 +0000 (0:00:00.334) 0:05:09.370 **********
2026-04-17 03:53:28.117172 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:53:28.117178 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:53:28.117184 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:53:28.117190 | orchestrator |
2026-04-17 03:53:28.117196 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 03:53:28.117202 | orchestrator | Friday 17 April 2026 03:52:35 +0000 (0:00:00.350) 0:05:09.720 **********
2026-04-17 03:53:28.117208 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:53:28.117215 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:53:28.117221 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:53:28.117226 | orchestrator |
2026-04-17 03:53:28.117232 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 03:53:28.117238 | orchestrator | Friday 17 April 2026 03:52:35 +0000 (0:00:00.350) 0:05:10.071 **********
2026-04-17 03:53:28.117243 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:53:28.117249 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:53:28.117255 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:53:28.117260 | orchestrator |
2026-04-17 03:53:28.117266 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 03:53:28.117272 | orchestrator | Friday 17 April 2026 03:52:36 +0000 (0:00:00.536) 0:05:10.608 **********
2026-04-17 03:53:28.117278 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:53:28.117283 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:53:28.117289 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:53:28.117294 | orchestrator |
2026-04-17 03:53:28.117300 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-17 03:53:28.117306 | orchestrator | Friday 17 April 2026 03:52:36 +0000 (0:00:00.494) 0:05:11.102 **********
2026-04-17 03:53:28.117312 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 03:53:28.117318 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 03:53:28.117324 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 03:53:28.117330 | orchestrator |
2026-04-17 03:53:28.117380 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-17 03:53:28.117387 | orchestrator | Friday 17 April 2026 03:52:37 +0000 (0:00:00.755) 0:05:11.858 **********
2026-04-17 03:53:28.117393 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:53:28.117400 | orchestrator |
2026-04-17 03:53:28.117406 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-17 03:53:28.117412 | orchestrator | Friday 17 April 2026 03:52:38 +0000 (0:00:00.620) 0:05:12.478 **********
2026-04-17 03:53:28.117417 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:53:28.117423 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:53:28.117429 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:53:28.117435 | orchestrator |
2026-04-17 03:53:28.117444 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-17 03:53:28.117454 | orchestrator | Friday 17 April 2026 03:52:38 +0000 (0:00:00.602) 0:05:13.081 **********
2026-04-17 03:53:28.117486 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:53:28.117496 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:53:28.117505 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:53:28.117515 | orchestrator |
2026-04-17 03:53:28.117524 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-17 03:53:28.117535 | orchestrator | Friday 17 April 2026 03:52:39 +0000 (0:00:00.282) 0:05:13.364 **********
2026-04-17 03:53:28.117547 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-17 03:53:28.117558 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-17 03:53:28.117568 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-17 03:53:28.117578 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-17 03:53:28.117585 | orchestrator |
2026-04-17 03:53:28.117591 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-17 03:53:28.117611 | orchestrator | Friday 17 April 2026 03:52:48 +0000 (0:00:09.711) 0:05:23.076 **********
2026-04-17 03:53:28.117618 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:53:28.117625 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:53:28.117631 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:53:28.117638 | orchestrator |
2026-04-17 03:53:28.117645 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-17 03:53:28.117652 | orchestrator | Friday 17 April 2026 03:52:49 +0000 (0:00:00.684) 0:05:23.760 **********
2026-04-17 03:53:28.117658 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-17 03:53:28.117665 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-17 03:53:28.117672 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-17 03:53:28.117678 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-17 03:53:28.117685 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 03:53:28.117692 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 03:53:28.117698 | orchestrator |
2026-04-17 03:53:28.117705 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-17 03:53:28.117711 | orchestrator | Friday 17 April 2026 03:52:51 +0000 (0:00:02.047) 0:05:25.808 **********
2026-04-17 03:53:28.117718 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-17 03:53:28.117724 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-17 03:53:28.117733 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-17 03:53:28.117742 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-17 03:53:28.117752 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-17 03:53:28.117783 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-17 03:53:28.117796 | orchestrator |
2026-04-17 03:53:28.117806 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-17 03:53:28.117815 | orchestrator | Friday 17 April 2026 03:52:52 +0000 (0:00:01.205) 0:05:27.014 **********
2026-04-17 03:53:28.117824 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:53:28.117833 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:53:28.117842 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:53:28.117851 | orchestrator |
2026-04-17 03:53:28.117860 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-17 03:53:28.117870 | orchestrator | Friday 17 April 2026 03:52:53 +0000 (0:00:00.676) 0:05:27.690 **********
2026-04-17 03:53:28.117880 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:53:28.117889 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:53:28.117899 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:53:28.117909 | orchestrator |
2026-04-17 03:53:28.117919 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-17 03:53:28.117928 | orchestrator | Friday 17 April 2026 03:52:54 +0000 (0:00:00.560) 0:05:28.251 **********
2026-04-17 03:53:28.117938 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:53:28.117948 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:53:28.117969 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:53:28.117975 | orchestrator |
2026-04-17 03:53:28.117981 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-17 03:53:28.117987 | orchestrator | Friday 17 April 2026 03:52:54 +0000 (0:00:00.368) 0:05:28.620 **********
2026-04-17 03:53:28.117992 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:53:28.117998 | orchestrator |
2026-04-17 03:53:28.118004 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-17 03:53:28.118010 | orchestrator | Friday 17 April 2026 03:52:54 +0000 (0:00:00.523) 0:05:29.143 **********
2026-04-17 03:53:28.118065 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:53:28.118072 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:53:28.118078 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:53:28.118084 | orchestrator |
2026-04-17 03:53:28.118089 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-17 03:53:28.118095 | orchestrator | Friday 17 April 2026 03:52:55 +0000 (0:00:00.575) 0:05:29.719 **********
2026-04-17 03:53:28.118111 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:53:28.118121 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:53:28.118141 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:53:28.118152 | orchestrator |
2026-04-17 03:53:28.118161 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-17 03:53:28.118170 | orchestrator | Friday 17 April 2026 03:52:55 +0000 (0:00:00.351) 0:05:30.071 **********
2026-04-17 03:53:28.118179 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:53:28.118188 | orchestrator |
2026-04-17 03:53:28.118198 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-17 03:53:28.118207 | orchestrator | Friday 17 April 2026 03:52:56 +0000 (0:00:00.619) 0:05:30.690 **********
2026-04-17 03:53:28.118215 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:53:28.118223 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:53:28.118233 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:53:28.118241 | orchestrator |
2026-04-17 03:53:28.118251 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-17 03:53:28.118260 | orchestrator | Friday 17 April 2026 03:52:58 +0000 (0:00:01.818) 0:05:32.508 **********
2026-04-17 03:53:28.118269 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:53:28.118278 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:53:28.118287 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:53:28.118297 | orchestrator |
2026-04-17 03:53:28.118306 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-17 03:53:28.118316 | orchestrator | Friday 17 April 2026 03:52:59 +0000 (0:00:01.149) 0:05:33.657 **********
2026-04-17 03:53:28.118325 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:53:28.118356 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:53:28.118364 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:53:28.118370 | orchestrator |
2026-04-17 03:53:28.118375 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-17 03:53:28.118381 | orchestrator | Friday 17 April 2026 03:53:01 +0000 (0:00:01.652) 0:05:35.310 **********
2026-04-17 03:53:28.118387 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:53:28.118401 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:53:28.118407 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:53:28.118412 | orchestrator |
2026-04-17 03:53:28.118418 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-17 03:53:28.118424 | orchestrator | Friday 17 April 2026 03:53:02 +0000 (0:00:01.803) 0:05:37.114 **********
2026-04-17 03:53:28.118429 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:53:28.118435 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:53:28.118441 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-17 03:53:28.118447 | orchestrator |
2026-04-17 03:53:28.118452 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-17 03:53:28.118465 | orchestrator | Friday 17 April 2026 03:53:03 +0000 (0:00:00.845) 0:05:37.960 **********
2026-04-17 03:53:28.118471 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-17 03:53:28.118477 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-17 03:53:28.118482 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-04-17 03:53:28.118488 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-04-17 03:53:28.118494 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-17 03:53:28.118500 | orchestrator |
2026-04-17 03:53:28.118514 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-17 03:53:53.660058 | orchestrator | Friday 17 April 2026 03:53:28 +0000 (0:00:24.318) 0:06:02.278 **********
2026-04-17 03:53:53.660155 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-17 03:53:53.660163 | orchestrator |
2026-04-17 03:53:53.660168 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-17 03:53:53.660173 | orchestrator | Friday 17 April 2026 03:53:29 +0000 (0:00:01.189) 0:06:03.467 **********
2026-04-17 03:53:53.660178 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:53:53.660183 | orchestrator |
2026-04-17 03:53:53.660187 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-17 03:53:53.660194 | orchestrator | Friday 17 April 2026 03:53:29 +0000 (0:00:00.285) 0:06:03.753 **********
2026-04-17 03:53:53.660200 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:53:53.660206 | orchestrator |
2026-04-17 03:53:53.660212 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-17 03:53:53.660219 | orchestrator | Friday 17 April 2026 03:53:29 +0000 (0:00:00.155) 0:06:03.908 **********
2026-04-17 03:53:53.660226 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-17 03:53:53.660233 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-17 03:53:53.660239 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-17 03:53:53.660246 | orchestrator |
2026-04-17 03:53:53.660252 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-17 03:53:53.660256 | orchestrator | Friday 17 April 2026 03:53:35 +0000 (0:00:06.206) 0:06:10.115 **********
2026-04-17 03:53:53.660261 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-17 03:53:53.660265 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-17 03:53:53.660269 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-17 03:53:53.660273 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-17 03:53:53.660277 | orchestrator |
2026-04-17 03:53:53.660281 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-17 03:53:53.660285 | orchestrator | Friday 17 April 2026 03:53:41 +0000 (0:00:05.102) 0:06:15.217 **********
2026-04-17 03:53:53.660289 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:53:53.660293 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:53:53.660315 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:53:53.660319 | orchestrator |
2026-04-17 03:53:53.660323 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-17 03:53:53.660327 | orchestrator | Friday 17 April 2026 03:53:41 +0000 (0:00:00.604) 0:06:15.822 **********
2026-04-17 03:53:53.660331 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:53:53.660335 | orchestrator |
2026-04-17 03:53:53.660339 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-17 03:53:53.660342 | orchestrator | Friday 17 April 2026 03:53:42 +0000 (0:00:00.695) 0:06:16.517 **********
2026-04-17 03:53:53.660364 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:53:53.660368 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:53:53.660371 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:53:53.660375 | orchestrator |
2026-04-17 03:53:53.660379 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-17 03:53:53.660383 | orchestrator | Friday 17 April 2026 03:53:42 +0000 (0:00:00.323) 0:06:16.841 **********
2026-04-17 03:53:53.660386 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:53:53.660390 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:53:53.660394 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:53:53.660398 | orchestrator |
2026-04-17 03:53:53.660401 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-17 03:53:53.660405 | orchestrator | Friday 17 April 2026 03:53:43 +0000 (0:00:01.074) 0:06:17.916 **********
2026-04-17 03:53:53.660409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 03:53:53.660413 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 03:53:53.660416 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 03:53:53.660420 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:53:53.660424 | orchestrator |
2026-04-17 03:53:53.660439 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-17 03:53:53.660443 | orchestrator | Friday 17 April 2026 03:53:44 +0000 (0:00:00.765) 0:06:18.681 **********
2026-04-17 03:53:53.660446 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:53:53.660450 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:53:53.660454 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:53:53.660458 | orchestrator |
2026-04-17 03:53:53.660461 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-17 03:53:53.660465 | orchestrator |
2026-04-17 03:53:53.660469 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 03:53:53.660473 | orchestrator | Friday 17 April 2026 03:53:45 +0000 (0:00:00.685) 0:06:19.366 **********
2026-04-17 03:53:53.660477 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:53:53.660482 | orchestrator |
2026-04-17 03:53:53.660486 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 03:53:53.660492 | orchestrator | Friday 17 April 2026 03:53:45 +0000 (0:00:00.467) 0:06:19.834 **********
2026-04-17 03:53:53.660499 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:53:53.660506 | orchestrator |
2026-04-17 03:53:53.660515 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 03:53:53.660524 | orchestrator | Friday 17 April 2026 03:53:46 +0000 (0:00:00.679) 0:06:20.513 **********
2026-04-17 03:53:53.660529 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:53:53.660536 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:53:53.660556 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:53:53.660562 | orchestrator |
2026-04-17 03:53:53.660567 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 03:53:53.660573 | orchestrator | Friday 17 April 2026 03:53:46 +0000 (0:00:00.358) 0:06:20.872 **********
2026-04-17 03:53:53.660578 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:53:53.660584 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:53:53.660590 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:53:53.660595 | orchestrator |
2026-04-17 03:53:53.660601 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 03:53:53.660606 | orchestrator | Friday 17 April 2026 03:53:47 +0000 (0:00:00.656) 0:06:21.528 **********
2026-04-17 03:53:53.660612 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:53:53.660619 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:53:53.660625 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:53:53.660631 | orchestrator |
2026-04-17 03:53:53.660637 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 03:53:53.660651 | orchestrator | Friday 17 April 2026 03:53:47 +0000 (0:00:00.636) 0:06:22.165 **********
2026-04-17 03:53:53.660657 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:53:53.660663 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:53:53.660668 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:53:53.660674 | orchestrator |
2026-04-17 03:53:53.660681 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 03:53:53.660687 | orchestrator | Friday 17 April 2026 03:53:48 +0000 (0:00:00.866) 0:06:23.032 **********
2026-04-17 03:53:53.660694 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:53:53.660701 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:53:53.660706 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:53:53.660710 | orchestrator |
2026-04-17 03:53:53.660715 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 03:53:53.660719 | orchestrator | Friday 17 April 2026 03:53:49 +0000 (0:00:00.293) 0:06:23.325 **********
2026-04-17 03:53:53.660723 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:53:53.660728 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:53:53.660732 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:53:53.660736 | orchestrator |
2026-04-17 03:53:53.660741 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 03:53:53.660745 | orchestrator | Friday 17 April 2026 03:53:49 +0000 (0:00:00.261) 0:06:23.587 **********
2026-04-17 03:53:53.660749 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:53:53.660754 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:53:53.660758 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:53:53.660762 | orchestrator |
2026-04-17 03:53:53.660766 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 03:53:53.660770 | orchestrator | Friday 17 April 2026 03:53:49 +0000 (0:00:00.274) 0:06:23.861 **********
2026-04-17 03:53:53.660775 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:53:53.660779 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:53:53.660783 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:53:53.660788 | orchestrator |
2026-04-17 03:53:53.660792 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 03:53:53.660796 | orchestrator | Friday 17 April 2026 03:53:50 +0000 (0:00:00.845) 0:06:24.707 **********
2026-04-17 03:53:53.660800 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:53:53.660805 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:53:53.660811 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:53:53.660818 | orchestrator |
2026-04-17 03:53:53.660824 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 03:53:53.660830 | orchestrator | Friday 17 April 2026 03:53:51 +0000 (0:00:00.668) 0:06:25.375 **********
2026-04-17 03:53:53.660845 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:53:53.660850 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:53:53.660863 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:53:53.660869 | orchestrator |
2026-04-17 03:53:53.660875 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 03:53:53.660881 | orchestrator | Friday 17 April 2026 03:53:51 +0000 (0:00:00.264) 0:06:25.640 **********
2026-04-17 03:53:53.660887 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:53:53.660894 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:53:53.660900 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:53:53.660906 | orchestrator |
2026-04-17 03:53:53.660912 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 03:53:53.660916 | orchestrator | Friday 17 April 2026 03:53:51 +0000 (0:00:00.288) 0:06:25.928 **********
2026-04-17 03:53:53.660919 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:53:53.660923 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:53:53.660931 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:53:53.660935 | orchestrator |
2026-04-17 03:53:53.660939 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 03:53:53.660943 | orchestrator | Friday 17 April 2026 03:53:52 +0000 (0:00:00.549) 0:06:26.478 **********
2026-04-17 03:53:53.660950 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:53:53.660954 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:53:53.660958 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:53:53.660962 | orchestrator |
2026-04-17 03:53:53.660965 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 03:53:53.660970 | orchestrator | Friday 17 April 2026 03:53:52 +0000 (0:00:00.293) 0:06:26.771 **********
2026-04-17 03:53:53.660977 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:53:53.660983 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:53:53.660989 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:53:53.660994 | orchestrator |
2026-04-17 03:53:53.661001 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 03:53:53.661006 | orchestrator | Friday 17 April 2026 03:53:52 +0000 (0:00:00.277) 0:06:27.048 **********
2026-04-17 03:53:53.661013 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:53:53.661019 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:53:53.661025 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:53:53.661032 | orchestrator |
2026-04-17 03:53:53.661038 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 03:53:53.661044 | orchestrator | Friday 17 April 2026 03:53:53 +0000 (0:00:00.275) 0:06:27.324 **********
2026-04-17 03:53:53.661050 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:53:53.661057 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:53:53.661063 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:53:53.661068 | orchestrator |
2026-04-17 03:53:53.661077 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 03:54:46.292207 | orchestrator | Friday 17 April 2026 03:53:53 +0000 (0:00:00.503) 0:06:27.827 **********
2026-04-17 03:54:46.292379 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:54:46.292412 | orchestrator | skipping: [testbed-node-4]
2026-04-17
03:54:46.292432 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:54:46.292451 | orchestrator | 2026-04-17 03:54:46.292470 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 03:54:46.292485 | orchestrator | Friday 17 April 2026 03:53:53 +0000 (0:00:00.284) 0:06:28.112 ********** 2026-04-17 03:54:46.292497 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:54:46.292509 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:54:46.292520 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:54:46.292531 | orchestrator | 2026-04-17 03:54:46.292542 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 03:54:46.292553 | orchestrator | Friday 17 April 2026 03:53:54 +0000 (0:00:00.294) 0:06:28.407 ********** 2026-04-17 03:54:46.292564 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:54:46.292574 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:54:46.292588 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:54:46.292606 | orchestrator | 2026-04-17 03:54:46.292632 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-17 03:54:46.292654 | orchestrator | Friday 17 April 2026 03:53:54 +0000 (0:00:00.694) 0:06:29.102 ********** 2026-04-17 03:54:46.292671 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:54:46.292690 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:54:46.292708 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:54:46.292726 | orchestrator | 2026-04-17 03:54:46.292744 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-17 03:54:46.292760 | orchestrator | Friday 17 April 2026 03:53:55 +0000 (0:00:00.285) 0:06:29.387 ********** 2026-04-17 03:54:46.292776 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 03:54:46.292796 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 03:54:46.292816 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 03:54:46.292834 | orchestrator | 2026-04-17 03:54:46.292853 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-17 03:54:46.292979 | orchestrator | Friday 17 April 2026 03:53:55 +0000 (0:00:00.778) 0:06:30.165 ********** 2026-04-17 03:54:46.293001 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:54:46.293023 | orchestrator | 2026-04-17 03:54:46.293043 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-17 03:54:46.293062 | orchestrator | Friday 17 April 2026 03:53:56 +0000 (0:00:00.641) 0:06:30.807 ********** 2026-04-17 03:54:46.293078 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:54:46.293089 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:54:46.293100 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:54:46.293111 | orchestrator | 2026-04-17 03:54:46.293121 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-17 03:54:46.293132 | orchestrator | Friday 17 April 2026 03:53:56 +0000 (0:00:00.293) 0:06:31.100 ********** 2026-04-17 03:54:46.293143 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:54:46.293153 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:54:46.293164 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:54:46.293174 | orchestrator | 2026-04-17 03:54:46.293185 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-17 03:54:46.293196 | orchestrator | Friday 17 April 2026 03:53:57 +0000 (0:00:00.267) 0:06:31.368 ********** 2026-04-17 03:54:46.293207 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:54:46.293217 | 
orchestrator | ok: [testbed-node-4] 2026-04-17 03:54:46.293263 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:54:46.293274 | orchestrator | 2026-04-17 03:54:46.293285 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-17 03:54:46.293296 | orchestrator | Friday 17 April 2026 03:53:57 +0000 (0:00:00.543) 0:06:31.911 ********** 2026-04-17 03:54:46.293306 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:54:46.293317 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:54:46.293328 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:54:46.293338 | orchestrator | 2026-04-17 03:54:46.293349 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-17 03:54:46.293360 | orchestrator | Friday 17 April 2026 03:53:58 +0000 (0:00:00.497) 0:06:32.408 ********** 2026-04-17 03:54:46.293387 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-17 03:54:46.293399 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-17 03:54:46.293410 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-17 03:54:46.293421 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-17 03:54:46.293432 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-17 03:54:46.293444 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-17 03:54:46.293454 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-17 03:54:46.293465 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-17 03:54:46.293475 | orchestrator | changed: [testbed-node-4] => 
(item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-17 03:54:46.293488 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-17 03:54:46.293507 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-17 03:54:46.293534 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-17 03:54:46.293581 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-17 03:54:46.293599 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-17 03:54:46.293616 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-17 03:54:46.293648 | orchestrator | 2026-04-17 03:54:46.293666 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-17 03:54:46.293684 | orchestrator | Friday 17 April 2026 03:54:00 +0000 (0:00:01.969) 0:06:34.377 ********** 2026-04-17 03:54:46.293703 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:54:46.293722 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:54:46.293741 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:54:46.293755 | orchestrator | 2026-04-17 03:54:46.293766 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-17 03:54:46.293777 | orchestrator | Friday 17 April 2026 03:54:00 +0000 (0:00:00.331) 0:06:34.709 ********** 2026-04-17 03:54:46.293787 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:54:46.293798 | orchestrator | 2026-04-17 03:54:46.293809 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-17 03:54:46.293819 | orchestrator | Friday 17 April 2026 03:54:01 +0000 (0:00:00.716) 
0:06:35.426 ********** 2026-04-17 03:54:46.293830 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-17 03:54:46.293841 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-17 03:54:46.293852 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-17 03:54:46.293862 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-17 03:54:46.293873 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-17 03:54:46.293883 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-17 03:54:46.293894 | orchestrator | 2026-04-17 03:54:46.293905 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-17 03:54:46.293915 | orchestrator | Friday 17 April 2026 03:54:02 +0000 (0:00:00.942) 0:06:36.368 ********** 2026-04-17 03:54:46.293926 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 03:54:46.293937 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-17 03:54:46.293947 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 03:54:46.293958 | orchestrator | 2026-04-17 03:54:46.293968 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-17 03:54:46.293979 | orchestrator | Friday 17 April 2026 03:54:04 +0000 (0:00:02.028) 0:06:38.397 ********** 2026-04-17 03:54:46.293989 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-17 03:54:46.294000 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-17 03:54:46.294011 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:54:46.294082 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-17 03:54:46.294093 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-17 03:54:46.294103 | orchestrator | changed: [testbed-node-4] 2026-04-17 
03:54:46.294115 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-17 03:54:46.294133 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-17 03:54:46.294151 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:54:46.294168 | orchestrator | 2026-04-17 03:54:46.294200 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-17 03:54:46.294250 | orchestrator | Friday 17 April 2026 03:54:05 +0000 (0:00:01.017) 0:06:39.415 ********** 2026-04-17 03:54:46.294269 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 03:54:46.294286 | orchestrator | 2026-04-17 03:54:46.294298 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-17 03:54:46.294308 | orchestrator | Friday 17 April 2026 03:54:07 +0000 (0:00:01.926) 0:06:41.341 ********** 2026-04-17 03:54:46.294319 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:54:46.294330 | orchestrator | 2026-04-17 03:54:46.294341 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-17 03:54:46.294361 | orchestrator | Friday 17 April 2026 03:54:07 +0000 (0:00:00.681) 0:06:42.023 ********** 2026-04-17 03:54:46.294391 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'}) 2026-04-17 03:54:46.294413 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'}) 2026-04-17 03:54:46.294430 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'}) 2026-04-17 03:54:46.294448 | orchestrator | changed: [testbed-node-3] 
=> (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'}) 2026-04-17 03:54:46.294466 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'}) 2026-04-17 03:54:46.294483 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'}) 2026-04-17 03:54:46.294502 | orchestrator | 2026-04-17 03:54:46.294522 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-17 03:54:46.294556 | orchestrator | Friday 17 April 2026 03:54:46 +0000 (0:00:38.426) 0:07:20.449 ********** 2026-04-17 03:55:23.192161 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.192373 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:55:23.192395 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:55:23.192406 | orchestrator | 2026-04-17 03:55:23.192419 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-17 03:55:23.192431 | orchestrator | Friday 17 April 2026 03:54:46 +0000 (0:00:00.331) 0:07:20.781 ********** 2026-04-17 03:55:23.192443 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:55:23.192454 | orchestrator | 2026-04-17 03:55:23.192465 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-17 03:55:23.192476 | orchestrator | Friday 17 April 2026 03:54:47 +0000 (0:00:00.829) 0:07:21.610 ********** 2026-04-17 03:55:23.192487 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:55:23.192499 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:55:23.192510 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:55:23.192520 | orchestrator | 2026-04-17 
03:55:23.192531 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-17 03:55:23.192542 | orchestrator | Friday 17 April 2026 03:54:48 +0000 (0:00:00.726) 0:07:22.337 ********** 2026-04-17 03:55:23.192553 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:55:23.192564 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:55:23.192576 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:55:23.192586 | orchestrator | 2026-04-17 03:55:23.192602 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-17 03:55:23.192625 | orchestrator | Friday 17 April 2026 03:54:50 +0000 (0:00:02.453) 0:07:24.790 ********** 2026-04-17 03:55:23.192653 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:55:23.192673 | orchestrator | 2026-04-17 03:55:23.192691 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-17 03:55:23.192709 | orchestrator | Friday 17 April 2026 03:54:51 +0000 (0:00:00.833) 0:07:25.623 ********** 2026-04-17 03:55:23.192727 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:55:23.192745 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:55:23.192764 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:55:23.192782 | orchestrator | 2026-04-17 03:55:23.192800 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-17 03:55:23.192819 | orchestrator | Friday 17 April 2026 03:54:52 +0000 (0:00:01.181) 0:07:26.804 ********** 2026-04-17 03:55:23.192839 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:55:23.192932 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:55:23.192946 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:55:23.192957 | orchestrator | 2026-04-17 03:55:23.192968 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] 
*************************************** 2026-04-17 03:55:23.192978 | orchestrator | Friday 17 April 2026 03:54:53 +0000 (0:00:01.139) 0:07:27.944 ********** 2026-04-17 03:55:23.192989 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:55:23.192999 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:55:23.193010 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:55:23.193020 | orchestrator | 2026-04-17 03:55:23.193031 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-17 03:55:23.193043 | orchestrator | Friday 17 April 2026 03:54:55 +0000 (0:00:01.721) 0:07:29.665 ********** 2026-04-17 03:55:23.193062 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.193079 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:55:23.193095 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:55:23.193112 | orchestrator | 2026-04-17 03:55:23.193130 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-17 03:55:23.193147 | orchestrator | Friday 17 April 2026 03:54:55 +0000 (0:00:00.288) 0:07:29.954 ********** 2026-04-17 03:55:23.193164 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.193208 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:55:23.193226 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:55:23.193245 | orchestrator | 2026-04-17 03:55:23.193264 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-17 03:55:23.193281 | orchestrator | Friday 17 April 2026 03:54:56 +0000 (0:00:00.284) 0:07:30.239 ********** 2026-04-17 03:55:23.193300 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-17 03:55:23.193323 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-04-17 03:55:23.193352 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-04-17 03:55:23.193391 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-17 03:55:23.193408 | orchestrator | ok: 
[testbed-node-4] => (item=2) 2026-04-17 03:55:23.193424 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-04-17 03:55:23.193441 | orchestrator | 2026-04-17 03:55:23.193458 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-17 03:55:23.193475 | orchestrator | Friday 17 April 2026 03:54:57 +0000 (0:00:00.963) 0:07:31.202 ********** 2026-04-17 03:55:23.193493 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-17 03:55:23.193512 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-04-17 03:55:23.193531 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-17 03:55:23.193550 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-17 03:55:23.193569 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-04-17 03:55:23.193588 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-17 03:55:23.193607 | orchestrator | 2026-04-17 03:55:23.193618 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-17 03:55:23.193629 | orchestrator | Friday 17 April 2026 03:54:59 +0000 (0:00:02.385) 0:07:33.587 ********** 2026-04-17 03:55:23.193640 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-04-17 03:55:23.193651 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-17 03:55:23.193662 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-17 03:55:23.193672 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-04-17 03:55:23.193683 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-17 03:55:23.193694 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-17 03:55:23.193704 | orchestrator | 2026-04-17 03:55:23.193715 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-17 03:55:23.193726 | orchestrator | Friday 17 April 2026 03:55:02 +0000 (0:00:03.280) 0:07:36.868 ********** 2026-04-17 03:55:23.193762 | orchestrator | 
skipping: [testbed-node-3] 2026-04-17 03:55:23.193773 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:55:23.193784 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-17 03:55:23.193809 | orchestrator | 2026-04-17 03:55:23.193821 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-17 03:55:23.193831 | orchestrator | Friday 17 April 2026 03:55:04 +0000 (0:00:02.006) 0:07:38.875 ********** 2026-04-17 03:55:23.193842 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.193852 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:55:23.193863 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-04-17 03:55:23.193875 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-17 03:55:23.193886 | orchestrator | 2026-04-17 03:55:23.193897 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-17 03:55:23.193907 | orchestrator | Friday 17 April 2026 03:55:17 +0000 (0:00:12.564) 0:07:51.439 ********** 2026-04-17 03:55:23.193918 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.193928 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:55:23.193939 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:55:23.193949 | orchestrator | 2026-04-17 03:55:23.193960 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-17 03:55:23.193971 | orchestrator | Friday 17 April 2026 03:55:18 +0000 (0:00:01.246) 0:07:52.686 ********** 2026-04-17 03:55:23.193982 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.193993 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:55:23.194003 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:55:23.194108 | orchestrator | 2026-04-17 03:55:23.194136 | orchestrator | RUNNING HANDLER [ceph-handler : Osds 
handler] ********************************** 2026-04-17 03:55:23.194155 | orchestrator | Friday 17 April 2026 03:55:19 +0000 (0:00:00.676) 0:07:53.362 ********** 2026-04-17 03:55:23.194217 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:55:23.194238 | orchestrator | 2026-04-17 03:55:23.194270 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-17 03:55:23.194289 | orchestrator | Friday 17 April 2026 03:55:19 +0000 (0:00:00.579) 0:07:53.941 ********** 2026-04-17 03:55:23.194308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 03:55:23.194327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 03:55:23.194345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 03:55:23.194361 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.194373 | orchestrator | 2026-04-17 03:55:23.194383 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-17 03:55:23.194394 | orchestrator | Friday 17 April 2026 03:55:20 +0000 (0:00:00.432) 0:07:54.373 ********** 2026-04-17 03:55:23.194405 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.194415 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:55:23.194426 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:55:23.194436 | orchestrator | 2026-04-17 03:55:23.194447 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-17 03:55:23.194458 | orchestrator | Friday 17 April 2026 03:55:20 +0000 (0:00:00.340) 0:07:54.714 ********** 2026-04-17 03:55:23.194468 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.194479 | orchestrator | 2026-04-17 03:55:23.194490 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 
2026-04-17 03:55:23.194500 | orchestrator | Friday 17 April 2026 03:55:20 +0000 (0:00:00.271) 0:07:54.985 ********** 2026-04-17 03:55:23.194511 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.194522 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:55:23.194532 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:55:23.194543 | orchestrator | 2026-04-17 03:55:23.194553 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-17 03:55:23.194567 | orchestrator | Friday 17 April 2026 03:55:21 +0000 (0:00:00.628) 0:07:55.614 ********** 2026-04-17 03:55:23.194586 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.194606 | orchestrator | 2026-04-17 03:55:23.194633 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-17 03:55:23.194668 | orchestrator | Friday 17 April 2026 03:55:21 +0000 (0:00:00.253) 0:07:55.868 ********** 2026-04-17 03:55:23.194698 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.194715 | orchestrator | 2026-04-17 03:55:23.194730 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-17 03:55:23.194747 | orchestrator | Friday 17 April 2026 03:55:21 +0000 (0:00:00.240) 0:07:56.109 ********** 2026-04-17 03:55:23.194766 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.194784 | orchestrator | 2026-04-17 03:55:23.194802 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-17 03:55:23.194821 | orchestrator | Friday 17 April 2026 03:55:22 +0000 (0:00:00.134) 0:07:56.243 ********** 2026-04-17 03:55:23.194839 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.194858 | orchestrator | 2026-04-17 03:55:23.194876 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-17 03:55:23.194893 | orchestrator | Friday 17 April 2026 
03:55:22 +0000 (0:00:00.235) 0:07:56.479 ********** 2026-04-17 03:55:23.194910 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.194928 | orchestrator | 2026-04-17 03:55:23.194947 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-17 03:55:23.194963 | orchestrator | Friday 17 April 2026 03:55:22 +0000 (0:00:00.251) 0:07:56.730 ********** 2026-04-17 03:55:23.194974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 03:55:23.194986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 03:55:23.194996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 03:55:23.195007 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:23.195018 | orchestrator | 2026-04-17 03:55:23.195028 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-17 03:55:23.195039 | orchestrator | Friday 17 April 2026 03:55:22 +0000 (0:00:00.437) 0:07:57.168 ********** 2026-04-17 03:55:23.195067 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:43.378534 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:55:43.378655 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:55:43.378671 | orchestrator | 2026-04-17 03:55:43.378681 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-17 03:55:43.378690 | orchestrator | Friday 17 April 2026 03:55:23 +0000 (0:00:00.615) 0:07:57.783 ********** 2026-04-17 03:55:43.378698 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:43.378705 | orchestrator | 2026-04-17 03:55:43.378713 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-17 03:55:43.378720 | orchestrator | Friday 17 April 2026 03:55:23 +0000 (0:00:00.242) 0:07:58.026 ********** 2026-04-17 03:55:43.378728 | orchestrator | skipping: [testbed-node-3] 2026-04-17 
03:55:43.378735 | orchestrator | 2026-04-17 03:55:43.378742 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-17 03:55:43.378750 | orchestrator | 2026-04-17 03:55:43.378757 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 03:55:43.378764 | orchestrator | Friday 17 April 2026 03:55:24 +0000 (0:00:00.688) 0:07:58.714 ********** 2026-04-17 03:55:43.378772 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:55:43.378781 | orchestrator | 2026-04-17 03:55:43.378789 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 03:55:43.378796 | orchestrator | Friday 17 April 2026 03:55:25 +0000 (0:00:01.411) 0:08:00.125 ********** 2026-04-17 03:55:43.378803 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 03:55:43.378811 | orchestrator | 2026-04-17 03:55:43.378818 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 03:55:43.378847 | orchestrator | Friday 17 April 2026 03:55:27 +0000 (0:00:01.323) 0:08:01.449 ********** 2026-04-17 03:55:43.378855 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:55:43.378862 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:55:43.378869 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:55:43.378876 | orchestrator | ok: [testbed-node-0] 2026-04-17 03:55:43.378884 | orchestrator | ok: [testbed-node-1] 2026-04-17 03:55:43.378891 | orchestrator | ok: [testbed-node-2] 2026-04-17 03:55:43.378898 | orchestrator | 2026-04-17 03:55:43.378905 | orchestrator | TASK [ceph-handler : Check for an osd container] 
*******************************
2026-04-17 03:55:43.378913 | orchestrator | Friday 17 April 2026 03:55:28 +0000 (0:00:01.302) 0:08:02.752 **********
2026-04-17 03:55:43.378920 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:55:43.378927 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:55:43.378934 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:55:43.378941 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:55:43.378948 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:55:43.378955 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:55:43.378963 | orchestrator |
2026-04-17 03:55:43.378970 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 03:55:43.378977 | orchestrator | Friday 17 April 2026 03:55:29 +0000 (0:00:00.770) 0:08:03.522 **********
2026-04-17 03:55:43.378984 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:55:43.378991 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:55:43.378998 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:55:43.379006 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:55:43.379014 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:55:43.379027 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:55:43.379039 | orchestrator |
2026-04-17 03:55:43.379051 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 03:55:43.379063 | orchestrator | Friday 17 April 2026 03:55:30 +0000 (0:00:00.984) 0:08:04.506 **********
2026-04-17 03:55:43.379073 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:55:43.379085 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:55:43.379098 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:55:43.379109 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:55:43.379121 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:55:43.379132 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:55:43.379144 | orchestrator |
2026-04-17 03:55:43.379180 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 03:55:43.379211 | orchestrator | Friday 17 April 2026 03:55:31 +0000 (0:00:01.333) 0:08:05.217 **********
2026-04-17 03:55:43.379224 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:55:43.379236 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:55:43.379248 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:55:43.379261 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:55:43.379272 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:55:43.379283 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:55:43.379295 | orchestrator |
2026-04-17 03:55:43.379308 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 03:55:43.379320 | orchestrator | Friday 17 April 2026 03:55:32 +0000 (0:00:01.333) 0:08:06.551 **********
2026-04-17 03:55:43.379333 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:55:43.379345 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:55:43.379358 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:55:43.379371 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:55:43.379384 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:55:43.379397 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:55:43.379410 | orchestrator |
2026-04-17 03:55:43.379423 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 03:55:43.379434 | orchestrator | Friday 17 April 2026 03:55:32 +0000 (0:00:00.597) 0:08:07.149 **********
2026-04-17 03:55:43.379447 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:55:43.379459 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:55:43.379470 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:55:43.379498 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:55:43.379512 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:55:43.379523 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:55:43.379535 | orchestrator |
2026-04-17 03:55:43.379546 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 03:55:43.379557 | orchestrator | Friday 17 April 2026 03:55:33 +0000 (0:00:00.861) 0:08:08.010 **********
2026-04-17 03:55:43.379569 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:55:43.379603 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:55:43.379616 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:55:43.379629 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:55:43.379640 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:55:43.379652 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:55:43.379664 | orchestrator |
2026-04-17 03:55:43.379672 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 03:55:43.379679 | orchestrator | Friday 17 April 2026 03:55:34 +0000 (0:00:01.052) 0:08:09.063 **********
2026-04-17 03:55:43.379687 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:55:43.379694 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:55:43.379701 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:55:43.379708 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:55:43.379715 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:55:43.379722 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:55:43.379729 | orchestrator |
2026-04-17 03:55:43.379736 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 03:55:43.379743 | orchestrator | Friday 17 April 2026 03:55:36 +0000 (0:00:01.320) 0:08:10.384 **********
2026-04-17 03:55:43.379750 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:55:43.379757 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:55:43.379764 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:55:43.379771 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:55:43.379779 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:55:43.379786 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:55:43.379793 | orchestrator |
2026-04-17 03:55:43.379800 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 03:55:43.379807 | orchestrator | Friday 17 April 2026 03:55:36 +0000 (0:00:00.626) 0:08:11.011 **********
2026-04-17 03:55:43.379814 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:55:43.379821 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:55:43.379828 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:55:43.379836 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:55:43.379843 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:55:43.379850 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:55:43.379857 | orchestrator |
2026-04-17 03:55:43.379864 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 03:55:43.379871 | orchestrator | Friday 17 April 2026 03:55:37 +0000 (0:00:00.885) 0:08:11.897 **********
2026-04-17 03:55:43.379878 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:55:43.379885 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:55:43.379892 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:55:43.379899 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:55:43.379907 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:55:43.379914 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:55:43.379921 | orchestrator |
2026-04-17 03:55:43.379928 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 03:55:43.379935 | orchestrator | Friday 17 April 2026 03:55:38 +0000 (0:00:00.600) 0:08:12.498 **********
2026-04-17 03:55:43.379942 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:55:43.379949 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:55:43.379956 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:55:43.379964 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:55:43.379972 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:55:43.379980 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:55:43.379989 | orchestrator |
2026-04-17 03:55:43.379997 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 03:55:43.380014 | orchestrator | Friday 17 April 2026 03:55:39 +0000 (0:00:00.928) 0:08:13.426 **********
2026-04-17 03:55:43.380022 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:55:43.380030 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:55:43.380039 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:55:43.380047 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:55:43.380056 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:55:43.380063 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:55:43.380071 | orchestrator |
2026-04-17 03:55:43.380078 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 03:55:43.380085 | orchestrator | Friday 17 April 2026 03:55:39 +0000 (0:00:00.588) 0:08:14.015 **********
2026-04-17 03:55:43.380092 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:55:43.380099 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:55:43.380106 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:55:43.380113 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:55:43.380120 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:55:43.380127 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:55:43.380134 | orchestrator |
2026-04-17 03:55:43.380141 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 03:55:43.380149 | orchestrator | Friday 17 April 2026 03:55:40 +0000 (0:00:00.927) 0:08:14.942 **********
2026-04-17 03:55:43.380228 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:55:43.380240 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:55:43.380252 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:55:43.380262 | orchestrator | skipping: [testbed-node-0]
2026-04-17 03:55:43.380269 | orchestrator | skipping: [testbed-node-1]
2026-04-17 03:55:43.380276 | orchestrator | skipping: [testbed-node-2]
2026-04-17 03:55:43.380283 | orchestrator |
2026-04-17 03:55:43.380291 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 03:55:43.380298 | orchestrator | Friday 17 April 2026 03:55:41 +0000 (0:00:00.597) 0:08:15.540 **********
2026-04-17 03:55:43.380305 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:55:43.380312 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:55:43.380320 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:55:43.380327 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:55:43.380334 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:55:43.380341 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:55:43.380348 | orchestrator |
2026-04-17 03:55:43.380356 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 03:55:43.380363 | orchestrator | Friday 17 April 2026 03:55:42 +0000 (0:00:00.900) 0:08:16.440 **********
2026-04-17 03:55:43.380370 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:55:43.380377 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:55:43.380384 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:55:43.380391 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:55:43.380397 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:55:43.380404 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:55:43.380410 | orchestrator |
2026-04-17 03:55:43.380417 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 03:55:43.380424 | orchestrator | Friday 17 April 2026 03:55:42 +0000 (0:00:00.659) 0:08:17.099 **********
2026-04-17 03:55:43.380437 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:14.785714 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:14.785817 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:14.785830 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:56:14.785839 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:56:14.785848 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:56:14.785856 | orchestrator |
2026-04-17 03:56:14.785866 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-17 03:56:14.785875 | orchestrator | Friday 17 April 2026 03:55:44 +0000 (0:00:01.422) 0:08:18.522 **********
2026-04-17 03:56:14.785885 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-17 03:56:14.785893 | orchestrator |
2026-04-17 03:56:14.785922 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-17 03:56:14.785930 | orchestrator | Friday 17 April 2026 03:55:48 +0000 (0:00:04.421) 0:08:22.943 **********
2026-04-17 03:56:14.785939 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-17 03:56:14.785947 | orchestrator |
2026-04-17 03:56:14.785955 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-17 03:56:14.785963 | orchestrator | Friday 17 April 2026 03:55:50 +0000 (0:00:02.057) 0:08:25.001 **********
2026-04-17 03:56:14.785971 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:56:14.785979 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:56:14.785987 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:56:14.785995 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:56:14.786003 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:56:14.786011 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:56:14.786109 | orchestrator |
2026-04-17 03:56:14.786118 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-17 03:56:14.786153 | orchestrator | Friday 17 April 2026 03:55:52 +0000 (0:00:01.499) 0:08:26.501 **********
2026-04-17 03:56:14.786161 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:56:14.786194 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:56:14.786204 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:56:14.786213 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:56:14.786222 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:56:14.786231 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:56:14.786241 | orchestrator |
2026-04-17 03:56:14.786250 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-17 03:56:14.786260 | orchestrator | Friday 17 April 2026 03:55:53 +0000 (0:00:01.271) 0:08:27.773 **********
2026-04-17 03:56:14.786270 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:56:14.786280 | orchestrator |
2026-04-17 03:56:14.786289 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-17 03:56:14.786299 | orchestrator | Friday 17 April 2026 03:55:54 +0000 (0:00:01.363) 0:08:29.136 **********
2026-04-17 03:56:14.786308 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:56:14.786330 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:56:14.786340 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:56:14.786370 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:56:14.786389 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:56:14.786402 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:56:14.786415 | orchestrator |
2026-04-17 03:56:14.786428 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-17 03:56:14.786441 | orchestrator | Friday 17 April 2026 03:55:56 +0000 (0:00:01.576) 0:08:30.713 **********
2026-04-17 03:56:14.786453 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:56:14.786465 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:56:14.786478 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:56:14.786492 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:56:14.786506 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:56:14.786520 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:56:14.786534 | orchestrator |
2026-04-17 03:56:14.786547 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-17 03:56:14.786560 | orchestrator | Friday 17 April 2026 03:56:00 +0000 (0:00:03.739) 0:08:34.453 **********
2026-04-17 03:56:14.786573 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 03:56:14.786582 | orchestrator |
2026-04-17 03:56:14.786598 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-17 03:56:14.786607 | orchestrator | Friday 17 April 2026 03:56:01 +0000 (0:00:01.377) 0:08:35.831 **********
2026-04-17 03:56:14.786615 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:14.786633 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:14.786641 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:14.786649 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:56:14.786656 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:56:14.786664 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:56:14.786671 | orchestrator |
2026-04-17 03:56:14.786679 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-17 03:56:14.786687 | orchestrator | Friday 17 April 2026 03:56:02 +0000 (0:00:00.656) 0:08:36.487 **********
2026-04-17 03:56:14.786696 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:56:14.786709 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:56:14.786723 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:56:14.786745 | orchestrator | changed: [testbed-node-0]
2026-04-17 03:56:14.786758 | orchestrator | changed: [testbed-node-1]
2026-04-17 03:56:14.786770 | orchestrator | changed: [testbed-node-2]
2026-04-17 03:56:14.786783 | orchestrator |
2026-04-17 03:56:14.786795 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-17 03:56:14.786808 | orchestrator | Friday 17 April 2026 03:56:04 +0000 (0:00:02.508) 0:08:38.996 **********
2026-04-17 03:56:14.786820 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:14.786831 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:14.786843 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:14.786855 | orchestrator | ok: [testbed-node-0]
2026-04-17 03:56:14.786866 | orchestrator | ok: [testbed-node-1]
2026-04-17 03:56:14.786878 | orchestrator | ok: [testbed-node-2]
2026-04-17 03:56:14.786891 | orchestrator |
2026-04-17 03:56:14.786904 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-17 03:56:14.786918 | orchestrator |
2026-04-17 03:56:14.786929 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 03:56:14.786967 | orchestrator | Friday 17 April 2026 03:56:06 +0000 (0:00:01.324) 0:08:40.321 **********
2026-04-17 03:56:14.786983 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:56:14.786997 | orchestrator |
2026-04-17 03:56:14.787010 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 03:56:14.787025 | orchestrator | Friday 17 April 2026 03:56:06 +0000 (0:00:00.538) 0:08:40.859 **********
2026-04-17 03:56:14.787039 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:56:14.787052 | orchestrator |
2026-04-17 03:56:14.787065 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 03:56:14.787078 | orchestrator | Friday 17 April 2026 03:56:07 +0000 (0:00:00.818) 0:08:41.677 **********
2026-04-17 03:56:14.787089 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:56:14.787097 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:56:14.787105 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:56:14.787113 | orchestrator |
2026-04-17 03:56:14.787151 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 03:56:14.787160 | orchestrator | Friday 17 April 2026 03:56:07 +0000 (0:00:00.347) 0:08:42.024 **********
2026-04-17 03:56:14.787169 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:14.787177 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:14.787184 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:14.787192 | orchestrator |
2026-04-17 03:56:14.787200 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 03:56:14.787208 | orchestrator | Friday 17 April 2026 03:56:08 +0000 (0:00:00.710) 0:08:42.735 **********
2026-04-17 03:56:14.787216 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:14.787223 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:14.787231 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:14.787239 | orchestrator |
2026-04-17 03:56:14.787247 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 03:56:14.787254 | orchestrator | Friday 17 April 2026 03:56:09 +0000 (0:00:00.712) 0:08:43.447 **********
2026-04-17 03:56:14.787272 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:14.787279 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:14.787287 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:14.787295 | orchestrator |
2026-04-17 03:56:14.787303 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 03:56:14.787310 | orchestrator | Friday 17 April 2026 03:56:10 +0000 (0:00:01.081) 0:08:44.529 **********
2026-04-17 03:56:14.787318 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:56:14.787326 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:56:14.787334 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:56:14.787342 | orchestrator |
2026-04-17 03:56:14.787350 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 03:56:14.787357 | orchestrator | Friday 17 April 2026 03:56:10 +0000 (0:00:00.370) 0:08:44.900 **********
2026-04-17 03:56:14.787365 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:56:14.787373 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:56:14.787381 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:56:14.787388 | orchestrator |
2026-04-17 03:56:14.787396 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 03:56:14.787404 | orchestrator | Friday 17 April 2026 03:56:11 +0000 (0:00:00.335) 0:08:45.235 **********
2026-04-17 03:56:14.787412 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:56:14.787419 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:56:14.787427 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:56:14.787435 | orchestrator |
2026-04-17 03:56:14.787443 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 03:56:14.787450 | orchestrator | Friday 17 April 2026 03:56:11 +0000 (0:00:00.294) 0:08:45.530 **********
2026-04-17 03:56:14.787458 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:14.787466 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:14.787474 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:14.787482 | orchestrator |
2026-04-17 03:56:14.787490 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 03:56:14.787497 | orchestrator | Friday 17 April 2026 03:56:12 +0000 (0:00:01.056) 0:08:46.586 **********
2026-04-17 03:56:14.787505 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:14.787513 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:14.787530 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:14.787549 | orchestrator |
2026-04-17 03:56:14.787565 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 03:56:14.787579 | orchestrator | Friday 17 April 2026 03:56:13 +0000 (0:00:00.713) 0:08:47.300 **********
2026-04-17 03:56:14.787591 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:56:14.787604 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:56:14.787616 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:56:14.787628 | orchestrator |
2026-04-17 03:56:14.787640 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 03:56:14.787652 | orchestrator | Friday 17 April 2026 03:56:13 +0000 (0:00:00.325) 0:08:47.625 **********
2026-04-17 03:56:14.787663 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:56:14.787674 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:56:14.787686 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:56:14.787700 | orchestrator |
2026-04-17 03:56:14.787712 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 03:56:14.787724 | orchestrator | Friday 17 April 2026 03:56:13 +0000 (0:00:00.323) 0:08:47.949 **********
2026-04-17 03:56:14.787736 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:14.787748 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:14.787760 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:14.787773 | orchestrator |
2026-04-17 03:56:14.787785 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 03:56:14.787799 | orchestrator | Friday 17 April 2026 03:56:14 +0000 (0:00:00.651) 0:08:48.600 **********
2026-04-17 03:56:14.787812 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:14.787824 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:14.787850 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:14.787864 | orchestrator |
2026-04-17 03:56:14.787878 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 03:56:14.787904 | orchestrator | Friday 17 April 2026 03:56:14 +0000 (0:00:00.350) 0:08:48.951 **********
2026-04-17 03:56:50.678785 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:50.678883 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:50.678891 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:50.678898 | orchestrator |
2026-04-17 03:56:50.678906 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 03:56:50.678914 | orchestrator | Friday 17 April 2026 03:56:15 +0000 (0:00:00.349) 0:08:49.300 **********
2026-04-17 03:56:50.678921 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:56:50.678929 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:56:50.678935 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:56:50.678942 | orchestrator |
2026-04-17 03:56:50.678948 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 03:56:50.678954 | orchestrator | Friday 17 April 2026 03:56:15 +0000 (0:00:00.335) 0:08:49.636 **********
2026-04-17 03:56:50.678958 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:56:50.678962 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:56:50.678966 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:56:50.678970 | orchestrator |
2026-04-17 03:56:50.678974 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 03:56:50.678978 | orchestrator | Friday 17 April 2026 03:56:16 +0000 (0:00:00.638) 0:08:50.274 **********
2026-04-17 03:56:50.678982 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:56:50.678986 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:56:50.678990 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:56:50.678993 | orchestrator |
2026-04-17 03:56:50.678997 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 03:56:50.679001 | orchestrator | Friday 17 April 2026 03:56:16 +0000 (0:00:00.332) 0:08:50.606 **********
2026-04-17 03:56:50.679004 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:50.679008 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:50.679012 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:50.679016 | orchestrator |
2026-04-17 03:56:50.679019 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 03:56:50.679023 | orchestrator | Friday 17 April 2026 03:56:16 +0000 (0:00:00.374) 0:08:50.981 **********
2026-04-17 03:56:50.679027 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:50.679030 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:50.679034 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:50.679038 | orchestrator |
2026-04-17 03:56:50.679042 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-17 03:56:50.679045 | orchestrator | Friday 17 April 2026 03:56:17 +0000 (0:00:00.887) 0:08:51.868 **********
2026-04-17 03:56:50.679049 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:56:50.679053 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:56:50.679057 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-17 03:56:50.679061 | orchestrator |
2026-04-17 03:56:50.679065 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-17 03:56:50.679069 | orchestrator | Friday 17 April 2026 03:56:18 +0000 (0:00:00.480) 0:08:52.349 **********
2026-04-17 03:56:50.679072 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-17 03:56:50.679076 | orchestrator |
2026-04-17 03:56:50.679080 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-17 03:56:50.679084 | orchestrator | Friday 17 April 2026 03:56:20 +0000 (0:00:02.079) 0:08:54.429 **********
2026-04-17 03:56:50.679088 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-17 03:56:50.679150 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:56:50.679157 | orchestrator |
2026-04-17 03:56:50.679164 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-17 03:56:50.679170 | orchestrator | Friday 17 April 2026 03:56:20 +0000 (0:00:00.308) 0:08:54.737 **********
2026-04-17 03:56:50.679193 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-17 03:56:50.679206 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-17 03:56:50.679210 | orchestrator |
2026-04-17 03:56:50.679214 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-17 03:56:50.679218 | orchestrator | Friday 17 April 2026 03:56:27 +0000 (0:00:07.050) 0:09:01.787 **********
2026-04-17 03:56:50.679222 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-17 03:56:50.679225 | orchestrator |
2026-04-17 03:56:50.679229 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-17 03:56:50.679233 | orchestrator | Friday 17 April 2026 03:56:31 +0000 (0:00:04.178) 0:09:05.966 **********
2026-04-17 03:56:50.679237 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:56:50.679241 | orchestrator |
2026-04-17 03:56:50.679245 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-17 03:56:50.679249 | orchestrator | Friday 17 April 2026 03:56:32 +0000 (0:00:00.604) 0:09:06.570 **********
2026-04-17 03:56:50.679253 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-17 03:56:50.679257 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-17 03:56:50.679260 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-17 03:56:50.679277 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-17 03:56:50.679281 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-17 03:56:50.679285 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-17 03:56:50.679289 | orchestrator |
2026-04-17 03:56:50.679293 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-17 03:56:50.679296 | orchestrator | Friday 17 April 2026 03:56:33 +0000 (0:00:01.044) 0:09:07.614 **********
2026-04-17 03:56:50.679300 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 03:56:50.679304 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-17 03:56:50.679308 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-17 03:56:50.679312 | orchestrator |
2026-04-17 03:56:50.679316 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-17 03:56:50.679319 | orchestrator | Friday 17 April 2026 03:56:35 +0000 (0:00:02.062) 0:09:09.677 **********
2026-04-17 03:56:50.679323 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-17 03:56:50.679328 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-17 03:56:50.679333 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:56:50.679337 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-17 03:56:50.679342 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-17 03:56:50.679346 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:56:50.679350 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-17 03:56:50.679354 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-17 03:56:50.679359 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:56:50.679363 | orchestrator |
2026-04-17 03:56:50.679372 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-17 03:56:50.679377 | orchestrator | Friday 17 April 2026 03:56:36 +0000 (0:00:01.433) 0:09:11.110 **********
2026-04-17 03:56:50.679381 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:56:50.679386 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:56:50.679390 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:56:50.679394 | orchestrator |
2026-04-17 03:56:50.679399 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-17 03:56:50.679403 | orchestrator | Friday 17 April 2026 03:56:39 +0000 (0:00:02.744) 0:09:13.855 **********
2026-04-17 03:56:50.679407 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:56:50.679412 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:56:50.679416 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:56:50.679420 | orchestrator |
2026-04-17 03:56:50.679424 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-17 03:56:50.679429 | orchestrator | Friday 17 April 2026 03:56:40 +0000 (0:00:00.329) 0:09:14.184 **********
2026-04-17 03:56:50.679433 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:56:50.679438 | orchestrator |
2026-04-17 03:56:50.679443 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-17 03:56:50.679447 | orchestrator | Friday 17 April 2026 03:56:40 +0000 (0:00:00.823) 0:09:15.008 **********
2026-04-17 03:56:50.679451 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:56:50.679456 | orchestrator |
2026-04-17 03:56:50.679460 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-17 03:56:50.679464 | orchestrator | Friday 17 April 2026 03:56:41 +0000 (0:00:00.550) 0:09:15.559 **********
2026-04-17 03:56:50.679469 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:56:50.679473 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:56:50.679477 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:56:50.679482 | orchestrator |
2026-04-17 03:56:50.679486 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-17 03:56:50.679490 | orchestrator | Friday 17 April 2026 03:56:42 +0000 (0:00:01.216) 0:09:16.776 **********
2026-04-17 03:56:50.679494 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:56:50.679499 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:56:50.679506 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:56:50.679511 | orchestrator |
2026-04-17 03:56:50.679515 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-17 03:56:50.679520 | orchestrator | Friday 17 April 2026 03:56:44 +0000 (0:00:01.449) 0:09:18.225 **********
2026-04-17 03:56:50.679524 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:56:50.679528 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:56:50.679532 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:56:50.679537 | orchestrator |
2026-04-17 03:56:50.679541 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-17 03:56:50.679545 | orchestrator | Friday 17 April 2026 03:56:45 +0000 (0:00:01.691) 0:09:19.917 **********
2026-04-17 03:56:50.679550 | orchestrator | changed: [testbed-node-4]
2026-04-17 03:56:50.679554 | orchestrator | changed: [testbed-node-3]
2026-04-17 03:56:50.679558 | orchestrator | changed: [testbed-node-5]
2026-04-17 03:56:50.679563 | orchestrator |
2026-04-17 03:56:50.679567 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-17 03:56:50.679571 | orchestrator | Friday 17 April 2026 03:56:47 +0000 (0:00:01.817) 0:09:21.734 **********
2026-04-17 03:56:50.679576 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:56:50.679580 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:56:50.679585 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:56:50.679589 | orchestrator |
2026-04-17 03:56:50.679593 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-17 03:56:50.679597 | orchestrator | Friday 17 April 2026 03:56:49 +0000 (0:00:01.513) 0:09:23.248
********** 2026-04-17 03:56:50.679605 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:56:50.679610 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:56:50.679614 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:56:50.679619 | orchestrator | 2026-04-17 03:56:50.679623 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-17 03:56:50.679627 | orchestrator | Friday 17 April 2026 03:56:49 +0000 (0:00:00.704) 0:09:23.953 ********** 2026-04-17 03:56:50.679635 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:57:09.793424 | orchestrator | 2026-04-17 03:57:09.793526 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-17 03:57:09.793540 | orchestrator | Friday 17 April 2026 03:56:50 +0000 (0:00:00.891) 0:09:24.844 ********** 2026-04-17 03:57:09.793550 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.793559 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.793568 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.793582 | orchestrator | 2026-04-17 03:57:09.793603 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-17 03:57:09.793619 | orchestrator | Friday 17 April 2026 03:56:51 +0000 (0:00:00.373) 0:09:25.218 ********** 2026-04-17 03:57:09.793632 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:57:09.793647 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:57:09.793660 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:57:09.793673 | orchestrator | 2026-04-17 03:57:09.793687 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-17 03:57:09.793700 | orchestrator | Friday 17 April 2026 03:56:52 +0000 (0:00:01.209) 0:09:26.427 ********** 2026-04-17 03:57:09.793714 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2026-04-17 03:57:09.793723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 03:57:09.793732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 03:57:09.793740 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:57:09.793748 | orchestrator | 2026-04-17 03:57:09.793757 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-17 03:57:09.793765 | orchestrator | Friday 17 April 2026 03:56:53 +0000 (0:00:00.921) 0:09:27.349 ********** 2026-04-17 03:57:09.793773 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.793781 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.793788 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.793796 | orchestrator | 2026-04-17 03:57:09.793804 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-17 03:57:09.793812 | orchestrator | 2026-04-17 03:57:09.793820 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 03:57:09.793828 | orchestrator | Friday 17 April 2026 03:56:54 +0000 (0:00:00.931) 0:09:28.280 ********** 2026-04-17 03:57:09.793836 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:57:09.793845 | orchestrator | 2026-04-17 03:57:09.793853 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 03:57:09.793861 | orchestrator | Friday 17 April 2026 03:56:54 +0000 (0:00:00.541) 0:09:28.822 ********** 2026-04-17 03:57:09.793872 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:57:09.793886 | orchestrator | 2026-04-17 03:57:09.793908 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2026-04-17 03:57:09.793921 | orchestrator | Friday 17 April 2026 03:56:55 +0000 (0:00:00.793) 0:09:29.615 ********** 2026-04-17 03:57:09.793933 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:57:09.793946 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:57:09.793959 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:57:09.793972 | orchestrator | 2026-04-17 03:57:09.793984 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 03:57:09.794103 | orchestrator | Friday 17 April 2026 03:56:55 +0000 (0:00:00.328) 0:09:29.944 ********** 2026-04-17 03:57:09.794125 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.794136 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.794150 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.794162 | orchestrator | 2026-04-17 03:57:09.794174 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-17 03:57:09.794188 | orchestrator | Friday 17 April 2026 03:56:56 +0000 (0:00:00.695) 0:09:30.639 ********** 2026-04-17 03:57:09.794202 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.794217 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.794227 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.794236 | orchestrator | 2026-04-17 03:57:09.794259 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 03:57:09.794269 | orchestrator | Friday 17 April 2026 03:56:57 +0000 (0:00:01.017) 0:09:31.656 ********** 2026-04-17 03:57:09.794278 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.794288 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.794297 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.794306 | orchestrator | 2026-04-17 03:57:09.794314 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-17 
03:57:09.794324 | orchestrator | Friday 17 April 2026 03:56:58 +0000 (0:00:00.733) 0:09:32.389 ********** 2026-04-17 03:57:09.794337 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:57:09.794356 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:57:09.794371 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:57:09.794382 | orchestrator | 2026-04-17 03:57:09.794395 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 03:57:09.794407 | orchestrator | Friday 17 April 2026 03:56:58 +0000 (0:00:00.322) 0:09:32.712 ********** 2026-04-17 03:57:09.794420 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:57:09.794433 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:57:09.794444 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:57:09.794457 | orchestrator | 2026-04-17 03:57:09.794470 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 03:57:09.794483 | orchestrator | Friday 17 April 2026 03:56:58 +0000 (0:00:00.331) 0:09:33.044 ********** 2026-04-17 03:57:09.794497 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:57:09.794511 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:57:09.794525 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:57:09.794539 | orchestrator | 2026-04-17 03:57:09.794548 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 03:57:09.794556 | orchestrator | Friday 17 April 2026 03:56:59 +0000 (0:00:00.728) 0:09:33.772 ********** 2026-04-17 03:57:09.794564 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.794572 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.794579 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.794587 | orchestrator | 2026-04-17 03:57:09.794595 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 03:57:09.794622 | 
orchestrator | Friday 17 April 2026 03:57:00 +0000 (0:00:00.760) 0:09:34.533 ********** 2026-04-17 03:57:09.794631 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.794638 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.794646 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.794654 | orchestrator | 2026-04-17 03:57:09.794662 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 03:57:09.794670 | orchestrator | Friday 17 April 2026 03:57:01 +0000 (0:00:00.739) 0:09:35.272 ********** 2026-04-17 03:57:09.794678 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:57:09.794685 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:57:09.794693 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:57:09.794701 | orchestrator | 2026-04-17 03:57:09.794709 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 03:57:09.794717 | orchestrator | Friday 17 April 2026 03:57:01 +0000 (0:00:00.358) 0:09:35.631 ********** 2026-04-17 03:57:09.794736 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:57:09.794744 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:57:09.794752 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:57:09.794760 | orchestrator | 2026-04-17 03:57:09.794768 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 03:57:09.794776 | orchestrator | Friday 17 April 2026 03:57:02 +0000 (0:00:00.593) 0:09:36.225 ********** 2026-04-17 03:57:09.794784 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.794792 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.794799 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.794807 | orchestrator | 2026-04-17 03:57:09.794815 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 03:57:09.794823 | orchestrator | Friday 17 April 2026 
03:57:02 +0000 (0:00:00.345) 0:09:36.570 ********** 2026-04-17 03:57:09.794831 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.794839 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.794846 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.794854 | orchestrator | 2026-04-17 03:57:09.794872 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 03:57:09.794880 | orchestrator | Friday 17 April 2026 03:57:02 +0000 (0:00:00.365) 0:09:36.936 ********** 2026-04-17 03:57:09.794888 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.794896 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.794904 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.794911 | orchestrator | 2026-04-17 03:57:09.794919 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 03:57:09.794927 | orchestrator | Friday 17 April 2026 03:57:03 +0000 (0:00:00.359) 0:09:37.296 ********** 2026-04-17 03:57:09.794935 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:57:09.794943 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:57:09.794951 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:57:09.794959 | orchestrator | 2026-04-17 03:57:09.794966 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 03:57:09.794974 | orchestrator | Friday 17 April 2026 03:57:03 +0000 (0:00:00.610) 0:09:37.906 ********** 2026-04-17 03:57:09.794982 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:57:09.794990 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:57:09.794998 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:57:09.795006 | orchestrator | 2026-04-17 03:57:09.795014 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 03:57:09.795021 | orchestrator | Friday 17 April 2026 03:57:04 +0000 (0:00:00.343) 
0:09:38.249 ********** 2026-04-17 03:57:09.795029 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:57:09.795037 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:57:09.795045 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:57:09.795053 | orchestrator | 2026-04-17 03:57:09.795061 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 03:57:09.795070 | orchestrator | Friday 17 April 2026 03:57:04 +0000 (0:00:00.332) 0:09:38.582 ********** 2026-04-17 03:57:09.795106 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.795125 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.795141 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.795154 | orchestrator | 2026-04-17 03:57:09.795166 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 03:57:09.795189 | orchestrator | Friday 17 April 2026 03:57:04 +0000 (0:00:00.372) 0:09:38.954 ********** 2026-04-17 03:57:09.795203 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:57:09.795216 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:57:09.795229 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:57:09.795237 | orchestrator | 2026-04-17 03:57:09.795246 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-17 03:57:09.795253 | orchestrator | Friday 17 April 2026 03:57:05 +0000 (0:00:00.903) 0:09:39.858 ********** 2026-04-17 03:57:09.795262 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:57:09.795278 | orchestrator | 2026-04-17 03:57:09.795286 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-17 03:57:09.795294 | orchestrator | Friday 17 April 2026 03:57:06 +0000 (0:00:00.801) 0:09:40.660 ********** 2026-04-17 03:57:09.795302 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 03:57:09.795310 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-17 03:57:09.795318 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 03:57:09.795326 | orchestrator | 2026-04-17 03:57:09.795334 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-17 03:57:09.795342 | orchestrator | Friday 17 April 2026 03:57:08 +0000 (0:00:02.033) 0:09:42.694 ********** 2026-04-17 03:57:09.795350 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-17 03:57:09.795358 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-17 03:57:09.795366 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:57:09.795374 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-17 03:57:09.795381 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-17 03:57:09.795389 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:57:09.795397 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-17 03:57:09.795405 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-17 03:57:09.795413 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:57:09.795421 | orchestrator | 2026-04-17 03:57:09.795436 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-17 03:58:00.492110 | orchestrator | Friday 17 April 2026 03:57:09 +0000 (0:00:01.258) 0:09:43.953 ********** 2026-04-17 03:58:00.492233 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:00.492247 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:00.492253 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:00.492282 | orchestrator | 2026-04-17 03:58:00.492290 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-17 03:58:00.492296 | orchestrator | Friday 17 April 2026 03:57:10 +0000 
(0:00:00.345) 0:09:44.298 ********** 2026-04-17 03:58:00.492303 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:58:00.492309 | orchestrator | 2026-04-17 03:58:00.492316 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-17 03:58:00.492323 | orchestrator | Friday 17 April 2026 03:57:11 +0000 (0:00:00.972) 0:09:45.271 ********** 2026-04-17 03:58:00.492330 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-17 03:58:00.492338 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 03:58:00.492344 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-17 03:58:00.492350 | orchestrator | 2026-04-17 03:58:00.492356 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-17 03:58:00.492362 | orchestrator | Friday 17 April 2026 03:57:11 +0000 (0:00:00.881) 0:09:46.152 ********** 2026-04-17 03:58:00.492368 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 03:58:00.492376 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-17 03:58:00.492383 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 03:58:00.492389 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-17 03:58:00.492395 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 03:58:00.492425 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-17 03:58:00.492431 | orchestrator | 2026-04-17 03:58:00.492438 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-17 03:58:00.492443 | orchestrator | Friday 17 April 2026 03:57:16 +0000 (0:00:04.274) 0:09:50.426 ********** 2026-04-17 03:58:00.492447 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 03:58:00.492451 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 03:58:00.492455 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 03:58:00.492459 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 03:58:00.492463 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 03:58:00.492467 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 03:58:00.492471 | orchestrator | 2026-04-17 03:58:00.492485 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-17 03:58:00.492489 | orchestrator | Friday 17 April 2026 03:57:18 +0000 (0:00:02.252) 0:09:52.679 ********** 2026-04-17 03:58:00.492494 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-17 03:58:00.492498 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:58:00.492502 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-17 03:58:00.492505 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:58:00.492509 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-17 03:58:00.492513 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:58:00.492517 | orchestrator | 2026-04-17 
03:58:00.492520 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-17 03:58:00.492524 | orchestrator | Friday 17 April 2026 03:57:20 +0000 (0:00:01.551) 0:09:54.231 ********** 2026-04-17 03:58:00.492529 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-17 03:58:00.492535 | orchestrator | 2026-04-17 03:58:00.492541 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-17 03:58:00.492550 | orchestrator | Friday 17 April 2026 03:57:20 +0000 (0:00:00.271) 0:09:54.502 ********** 2026-04-17 03:58:00.492558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 03:58:00.492565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 03:58:00.492571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 03:58:00.492576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 03:58:00.492602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 03:58:00.492625 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:00.492632 | orchestrator | 2026-04-17 03:58:00.492646 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-17 03:58:00.492652 | orchestrator | Friday 17 April 2026 03:57:20 +0000 (0:00:00.651) 0:09:55.154 ********** 2026-04-17 03:58:00.492658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 
3, 'type': 'replicated'}})  2026-04-17 03:58:00.492664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 03:58:00.492670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 03:58:00.492677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 03:58:00.492691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 03:58:00.492697 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:00.492702 | orchestrator | 2026-04-17 03:58:00.492707 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-17 03:58:00.492710 | orchestrator | Friday 17 April 2026 03:57:21 +0000 (0:00:00.640) 0:09:55.795 ********** 2026-04-17 03:58:00.492714 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 03:58:00.492719 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 03:58:00.492725 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 03:58:00.492731 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 03:58:00.492737 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 
'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 03:58:00.492743 | orchestrator | 2026-04-17 03:58:00.492753 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-17 03:58:00.492760 | orchestrator | Friday 17 April 2026 03:57:51 +0000 (0:00:29.655) 0:10:25.450 ********** 2026-04-17 03:58:00.492766 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:00.492772 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:00.492778 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:00.492784 | orchestrator | 2026-04-17 03:58:00.492790 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-17 03:58:00.492796 | orchestrator | Friday 17 April 2026 03:57:51 +0000 (0:00:00.353) 0:10:25.804 ********** 2026-04-17 03:58:00.492802 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:00.492808 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:00.492814 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:00.492820 | orchestrator | 2026-04-17 03:58:00.492826 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-17 03:58:00.492840 | orchestrator | Friday 17 April 2026 03:57:52 +0000 (0:00:00.705) 0:10:26.509 ********** 2026-04-17 03:58:00.492846 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:58:00.492852 | orchestrator | 2026-04-17 03:58:00.492858 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-17 03:58:00.492866 | orchestrator | Friday 17 April 2026 03:57:52 +0000 (0:00:00.582) 0:10:27.092 ********** 2026-04-17 03:58:00.492870 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:58:00.492874 | orchestrator | 
2026-04-17 03:58:00.492878 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-17 03:58:00.492881 | orchestrator | Friday 17 April 2026 03:57:53 +0000 (0:00:00.800) 0:10:27.892 ********** 2026-04-17 03:58:00.492885 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:58:00.492889 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:58:00.492893 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:58:00.492897 | orchestrator | 2026-04-17 03:58:00.492901 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-17 03:58:00.492905 | orchestrator | Friday 17 April 2026 03:57:55 +0000 (0:00:01.289) 0:10:29.182 ********** 2026-04-17 03:58:00.492931 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:58:00.492939 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:58:00.492953 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:58:00.492959 | orchestrator | 2026-04-17 03:58:00.492965 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-17 03:58:00.492971 | orchestrator | Friday 17 April 2026 03:57:56 +0000 (0:00:01.151) 0:10:30.334 ********** 2026-04-17 03:58:00.492977 | orchestrator | changed: [testbed-node-4] 2026-04-17 03:58:00.492983 | orchestrator | changed: [testbed-node-3] 2026-04-17 03:58:00.492989 | orchestrator | changed: [testbed-node-5] 2026-04-17 03:58:00.492995 | orchestrator | 2026-04-17 03:58:00.493000 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-17 03:58:00.493005 | orchestrator | Friday 17 April 2026 03:57:57 +0000 (0:00:01.691) 0:10:32.025 ********** 2026-04-17 03:58:00.493020 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-17 03:58:04.759257 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 03:58:04.759366 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-17 03:58:04.759383 | orchestrator | 2026-04-17 03:58:04.759398 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-17 03:58:04.759412 | orchestrator | Friday 17 April 2026 03:58:00 +0000 (0:00:02.626) 0:10:34.652 ********** 2026-04-17 03:58:04.759426 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:04.759440 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:04.759454 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:04.759467 | orchestrator | 2026-04-17 03:58:04.759480 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-17 03:58:04.759493 | orchestrator | Friday 17 April 2026 03:58:00 +0000 (0:00:00.334) 0:10:34.987 ********** 2026-04-17 03:58:04.759506 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:58:04.759520 | orchestrator | 2026-04-17 03:58:04.759533 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-17 03:58:04.759546 | orchestrator | Friday 17 April 2026 03:58:01 +0000 (0:00:00.838) 0:10:35.826 ********** 2026-04-17 03:58:04.759559 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:04.759573 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:04.759585 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:04.759597 | orchestrator | 2026-04-17 03:58:04.759610 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-17 03:58:04.759621 | orchestrator | Friday 17 April 2026 03:58:02 +0000 (0:00:00.358) 0:10:36.185 ********** 2026-04-17 03:58:04.759632 | orchestrator 
| skipping: [testbed-node-3] 2026-04-17 03:58:04.759643 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:04.759655 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:04.759667 | orchestrator | 2026-04-17 03:58:04.759680 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-17 03:58:04.759692 | orchestrator | Friday 17 April 2026 03:58:02 +0000 (0:00:00.357) 0:10:36.542 ********** 2026-04-17 03:58:04.759703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 03:58:04.759715 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 03:58:04.759728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 03:58:04.759740 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:04.759753 | orchestrator | 2026-04-17 03:58:04.759766 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-17 03:58:04.759778 | orchestrator | Friday 17 April 2026 03:58:03 +0000 (0:00:01.392) 0:10:37.935 ********** 2026-04-17 03:58:04.759791 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:04.759805 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:04.759819 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:04.759835 | orchestrator | 2026-04-17 03:58:04.759848 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 03:58:04.759893 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-04-17 03:58:04.759910 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-17 03:58:04.759940 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-17 03:58:04.759954 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  
rescued=0 ignored=0 2026-04-17 03:58:04.759967 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-17 03:58:04.759980 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-17 03:58:04.759992 | orchestrator | 2026-04-17 03:58:04.760005 | orchestrator | 2026-04-17 03:58:04.760018 | orchestrator | 2026-04-17 03:58:04.760031 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 03:58:04.760071 | orchestrator | Friday 17 April 2026 03:58:04 +0000 (0:00:00.287) 0:10:38.223 ********** 2026-04-17 03:58:04.760085 | orchestrator | =============================================================================== 2026-04-17 03:58:04.760099 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 62.32s 2026-04-17 03:58:04.760113 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.43s 2026-04-17 03:58:04.760125 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.66s 2026-04-17 03:58:04.760138 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.32s 2026-04-17 03:58:04.760150 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.81s 2026-04-17 03:58:04.760162 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.01s 2026-04-17 03:58:04.760173 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.56s 2026-04-17 03:58:04.760186 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.71s 2026-04-17 03:58:04.760198 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.69s 2026-04-17 03:58:04.760231 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.05s 2026-04-17 03:58:04.760243 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.21s 2026-04-17 03:58:04.760255 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.07s 2026-04-17 03:58:04.760268 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.10s 2026-04-17 03:58:04.760279 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.42s 2026-04-17 03:58:04.760291 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.27s 2026-04-17 03:58:04.760303 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.18s 2026-04-17 03:58:04.760316 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.74s 2026-04-17 03:58:04.760329 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.41s 2026-04-17 03:58:04.760342 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.28s 2026-04-17 03:58:04.760356 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.13s 2026-04-17 03:58:07.452959 | orchestrator | 2026-04-17 03:58:07 | INFO  | Task fffdc41f-372b-41ef-834c-2cd1c89712cc 
(ceph-pools) was prepared for execution. 2026-04-17 03:58:07.453093 | orchestrator | 2026-04-17 03:58:07 | INFO  | It takes a moment until task fffdc41f-372b-41ef-834c-2cd1c89712cc (ceph-pools) has been started and output is visible here. 2026-04-17 03:58:21.698361 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-17 03:58:21.698469 | orchestrator | 2.16.14 2026-04-17 03:58:21.698481 | orchestrator | 2026-04-17 03:58:21.698489 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-17 03:58:21.698498 | orchestrator | 2026-04-17 03:58:21.698506 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 03:58:21.698513 | orchestrator | Friday 17 April 2026 03:58:11 +0000 (0:00:00.610) 0:00:00.610 ********** 2026-04-17 03:58:21.698520 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 03:58:21.698528 | orchestrator | 2026-04-17 03:58:21.698553 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 03:58:21.698560 | orchestrator | Friday 17 April 2026 03:58:12 +0000 (0:00:00.649) 0:00:01.260 ********** 2026-04-17 03:58:21.698568 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:21.698575 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:21.698582 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:21.698589 | orchestrator | 2026-04-17 03:58:21.698595 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-17 03:58:21.698601 | orchestrator | Friday 17 April 2026 03:58:13 +0000 (0:00:00.640) 0:00:01.900 ********** 2026-04-17 03:58:21.698607 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:21.698614 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:21.698620 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:21.698627 
| orchestrator | 2026-04-17 03:58:21.698632 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 03:58:21.698638 | orchestrator | Friday 17 April 2026 03:58:13 +0000 (0:00:00.310) 0:00:02.211 ********** 2026-04-17 03:58:21.698645 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:21.698651 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:21.698657 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:21.698663 | orchestrator | 2026-04-17 03:58:21.698670 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 03:58:21.698694 | orchestrator | Friday 17 April 2026 03:58:14 +0000 (0:00:00.828) 0:00:03.040 ********** 2026-04-17 03:58:21.698703 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:21.698708 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:21.698715 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:21.698722 | orchestrator | 2026-04-17 03:58:21.698729 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-17 03:58:21.698735 | orchestrator | Friday 17 April 2026 03:58:14 +0000 (0:00:00.324) 0:00:03.365 ********** 2026-04-17 03:58:21.698741 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:21.698747 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:21.698754 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:21.698760 | orchestrator | 2026-04-17 03:58:21.698766 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-17 03:58:21.698774 | orchestrator | Friday 17 April 2026 03:58:15 +0000 (0:00:00.294) 0:00:03.660 ********** 2026-04-17 03:58:21.698780 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:21.698787 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:21.698793 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:21.698798 | orchestrator | 2026-04-17 03:58:21.698804 | orchestrator | TASK 
[ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-17 03:58:21.698810 | orchestrator | Friday 17 April 2026 03:58:15 +0000 (0:00:00.334) 0:00:03.994 ********** 2026-04-17 03:58:21.698816 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:21.698823 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:21.698829 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:21.698834 | orchestrator | 2026-04-17 03:58:21.698840 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-17 03:58:21.698846 | orchestrator | Friday 17 April 2026 03:58:15 +0000 (0:00:00.533) 0:00:04.528 ********** 2026-04-17 03:58:21.698873 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:21.698881 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:21.698888 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:21.698896 | orchestrator | 2026-04-17 03:58:21.698902 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-17 03:58:21.698909 | orchestrator | Friday 17 April 2026 03:58:16 +0000 (0:00:00.319) 0:00:04.848 ********** 2026-04-17 03:58:21.698916 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 03:58:21.698923 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 03:58:21.698929 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 03:58:21.698936 | orchestrator | 2026-04-17 03:58:21.698943 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-17 03:58:21.698951 | orchestrator | Friday 17 April 2026 03:58:16 +0000 (0:00:00.652) 0:00:05.501 ********** 2026-04-17 03:58:21.698959 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:21.698967 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:21.698975 | 
orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:21.698982 | orchestrator | 2026-04-17 03:58:21.698989 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-17 03:58:21.698997 | orchestrator | Friday 17 April 2026 03:58:17 +0000 (0:00:00.427) 0:00:05.928 ********** 2026-04-17 03:58:21.699004 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 03:58:21.699010 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 03:58:21.699017 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 03:58:21.699024 | orchestrator | 2026-04-17 03:58:21.699030 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-17 03:58:21.699064 | orchestrator | Friday 17 April 2026 03:58:19 +0000 (0:00:02.196) 0:00:08.125 ********** 2026-04-17 03:58:21.699070 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-17 03:58:21.699078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-17 03:58:21.699084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-17 03:58:21.699091 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:21.699097 | orchestrator | 2026-04-17 03:58:21.699120 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-17 03:58:21.699127 | orchestrator | Friday 17 April 2026 03:58:20 +0000 (0:00:00.708) 0:00:08.834 ********** 2026-04-17 03:58:21.699136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-17 03:58:21.699145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-17 03:58:21.699152 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 03:58:21.699158 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:21.699163 | orchestrator | 2026-04-17 03:58:21.699169 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-17 03:58:21.699174 | orchestrator | Friday 17 April 2026 03:58:21 +0000 (0:00:01.083) 0:00:09.917 ********** 2026-04-17 03:58:21.699189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:21.699206 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:21.699214 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:21.699221 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:21.699227 | orchestrator | 2026-04-17 03:58:21.699233 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-17 03:58:21.699239 | orchestrator | Friday 17 April 2026 03:58:21 +0000 (0:00:00.174) 0:00:10.092 ********** 2026-04-17 03:58:21.699248 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'aa031f9a4b08', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 03:58:18.148205', 'end': '2026-04-17 03:58:18.189972', 'delta': '0:00:00.041767', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['aa031f9a4b08'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-17 03:58:21.699257 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9f8a3fd74f0b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 03:58:18.713451', 'end': '2026-04-17 03:58:18.758814', 'delta': '0:00:00.045363', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['9f8a3fd74f0b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-17 03:58:21.699270 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f2e2f728469b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 03:58:19.298161', 'end': '2026-04-17 03:58:19.346206', 'delta': '0:00:00.048045', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2e2f728469b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-17 03:58:28.487433 | orchestrator | 2026-04-17 03:58:28.487540 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-17 03:58:28.487555 | orchestrator | Friday 17 April 2026 03:58:21 +0000 (0:00:00.225) 0:00:10.318 ********** 2026-04-17 03:58:28.487566 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:28.487575 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:28.487604 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:28.487614 | orchestrator | 2026-04-17 03:58:28.487622 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-17 03:58:28.487631 | orchestrator | Friday 17 April 2026 03:58:22 +0000 (0:00:00.452) 0:00:10.770 ********** 2026-04-17 03:58:28.487641 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-17 03:58:28.487650 | orchestrator | 2026-04-17 03:58:28.487658 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-17 03:58:28.487667 | 
orchestrator | Friday 17 April 2026 03:58:23 +0000 (0:00:01.603) 0:00:12.374 ********** 2026-04-17 03:58:28.487689 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:28.487698 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:28.487707 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:28.487715 | orchestrator | 2026-04-17 03:58:28.487724 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-17 03:58:28.487744 | orchestrator | Friday 17 April 2026 03:58:24 +0000 (0:00:00.321) 0:00:12.695 ********** 2026-04-17 03:58:28.487753 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:28.487761 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:28.487770 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:28.487778 | orchestrator | 2026-04-17 03:58:28.487787 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 03:58:28.487859 | orchestrator | Friday 17 April 2026 03:58:24 +0000 (0:00:00.742) 0:00:13.438 ********** 2026-04-17 03:58:28.487869 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:28.487878 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:28.487887 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:28.487895 | orchestrator | 2026-04-17 03:58:28.487904 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 03:58:28.487913 | orchestrator | Friday 17 April 2026 03:58:25 +0000 (0:00:00.293) 0:00:13.731 ********** 2026-04-17 03:58:28.487922 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:28.487931 | orchestrator | 2026-04-17 03:58:28.487941 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-17 03:58:28.487951 | orchestrator | Friday 17 April 2026 03:58:25 +0000 (0:00:00.173) 0:00:13.905 ********** 2026-04-17 03:58:28.487961 | orchestrator | skipping: 
[testbed-node-3] 2026-04-17 03:58:28.487971 | orchestrator | 2026-04-17 03:58:28.487981 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 03:58:28.487990 | orchestrator | Friday 17 April 2026 03:58:25 +0000 (0:00:00.252) 0:00:14.158 ********** 2026-04-17 03:58:28.488000 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:28.488010 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:28.488020 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:28.488052 | orchestrator | 2026-04-17 03:58:28.488063 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 03:58:28.488073 | orchestrator | Friday 17 April 2026 03:58:25 +0000 (0:00:00.331) 0:00:14.489 ********** 2026-04-17 03:58:28.488083 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:28.488093 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:28.488103 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:28.488112 | orchestrator | 2026-04-17 03:58:28.488122 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-17 03:58:28.488132 | orchestrator | Friday 17 April 2026 03:58:26 +0000 (0:00:00.527) 0:00:15.017 ********** 2026-04-17 03:58:28.488142 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:28.488152 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:28.488162 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:28.488172 | orchestrator | 2026-04-17 03:58:28.488181 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-17 03:58:28.488191 | orchestrator | Friday 17 April 2026 03:58:26 +0000 (0:00:00.339) 0:00:15.356 ********** 2026-04-17 03:58:28.488201 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:28.488222 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:28.488232 | orchestrator | skipping: 
[testbed-node-5] 2026-04-17 03:58:28.488242 | orchestrator | 2026-04-17 03:58:28.488252 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 03:58:28.488262 | orchestrator | Friday 17 April 2026 03:58:27 +0000 (0:00:00.355) 0:00:15.712 ********** 2026-04-17 03:58:28.488272 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:28.488282 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:28.488293 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:28.488303 | orchestrator | 2026-04-17 03:58:28.488312 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 03:58:28.488323 | orchestrator | Friday 17 April 2026 03:58:27 +0000 (0:00:00.322) 0:00:16.035 ********** 2026-04-17 03:58:28.488332 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:28.488341 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:28.488350 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:28.488358 | orchestrator | 2026-04-17 03:58:28.488367 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 03:58:28.488376 | orchestrator | Friday 17 April 2026 03:58:27 +0000 (0:00:00.497) 0:00:16.532 ********** 2026-04-17 03:58:28.488384 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:28.488393 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:28.488401 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:28.488410 | orchestrator | 2026-04-17 03:58:28.488418 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 03:58:28.488427 | orchestrator | Friday 17 April 2026 03:58:28 +0000 (0:00:00.362) 0:00:16.894 ********** 2026-04-17 03:58:28.488456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.488475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.488486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.488498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.488507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.488523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.488532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.488541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.488550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.488567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.538273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.538375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.538386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.538406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.538418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.538426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.538439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 
'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.538445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.538454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.538461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.538467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.538477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.643938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.644074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.644109 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:28.644121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.644161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.644192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.644209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.644219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.644236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.644245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.644256 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:28.644265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.644274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.644290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.947896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.947994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.948087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.948120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.948135 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.948181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 03:58:28.948247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.948282 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.948302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.948320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.948338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 03:58:28.948350 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:28.948363 | orchestrator | 2026-04-17 03:58:28.948374 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 03:58:28.948385 | orchestrator | Friday 17 April 2026 03:58:28 +0000 (0:00:00.576) 0:00:17.471 ********** 2026-04-17 03:58:28.948404 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.063765 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.063859 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.063870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.063875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.063880 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.063885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.063906 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.063916 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.063921 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.063926 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.063941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-17 03:58:29.199275 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.199354 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.199364 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.199371 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.199407 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.199425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.199432 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.199438 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.199444 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.199449 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.199478 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.199484 | orchestrator | skipping: [testbed-node-3] 2026-04-17 03:58:29.199496 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.316722 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.316812 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-17 03:58:29.316893 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.316919 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.316927 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.316935 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.316948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.316964 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 
253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.316981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.439664 | orchestrator | skipping: [testbed-node-4] 2026-04-17 03:58:29.439740 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.439749 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.439754 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.439774 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.439788 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.439792 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.439808 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.439815 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-17 03:58:29.439828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:29.439836 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:39.764646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:39.764756 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-02-37-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 03:58:39.764786 | orchestrator | skipping: [testbed-node-5] 2026-04-17 03:58:39.764795 | orchestrator | 2026-04-17 03:58:39.764802 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-17 03:58:39.764810 | orchestrator | Friday 17 April 2026 03:58:29 +0000 (0:00:00.595) 0:00:18.066 ********** 2026-04-17 03:58:39.764816 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:39.764824 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:39.764830 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:39.764836 | orchestrator | 2026-04-17 03:58:39.764858 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-17 03:58:39.764865 | orchestrator | Friday 17 April 2026 03:58:30 +0000 (0:00:00.903) 0:00:18.970 ********** 2026-04-17 03:58:39.764871 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:39.764878 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:39.764884 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:39.764890 | orchestrator | 2026-04-17 03:58:39.764896 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 03:58:39.764902 | orchestrator | Friday 17 April 2026 03:58:30 +0000 (0:00:00.304) 0:00:19.274 ********** 2026-04-17 03:58:39.764909 | orchestrator | ok: [testbed-node-3] 2026-04-17 03:58:39.764914 | orchestrator | ok: [testbed-node-4] 2026-04-17 03:58:39.764921 | orchestrator | ok: [testbed-node-5] 2026-04-17 03:58:39.764927 | orchestrator | 2026-04-17 03:58:39.764933 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 03:58:39.764939 | orchestrator | Friday 17 April 2026 03:58:31 +0000 (0:00:00.725) 0:00:20.000 
**********
2026-04-17 03:58:39.764958 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:58:39.764965 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:58:39.764970 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:58:39.764977 | orchestrator |
2026-04-17 03:58:39.764983 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 03:58:39.764990 | orchestrator | Friday 17 April 2026 03:58:31 +0000 (0:00:00.304) 0:00:20.304 **********
2026-04-17 03:58:39.764996 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:58:39.765001 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:58:39.765008 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:58:39.765012 | orchestrator |
2026-04-17 03:58:39.765016 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 03:58:39.765020 | orchestrator | Friday 17 April 2026 03:58:32 +0000 (0:00:00.680) 0:00:20.985 **********
2026-04-17 03:58:39.765067 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:58:39.765072 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:58:39.765076 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:58:39.765079 | orchestrator |
2026-04-17 03:58:39.765083 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 03:58:39.765087 | orchestrator | Friday 17 April 2026 03:58:32 +0000 (0:00:00.336) 0:00:21.321 **********
2026-04-17 03:58:39.765091 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-17 03:58:39.765096 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 03:58:39.765099 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-17 03:58:39.765103 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-17 03:58:39.765107 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 03:58:39.765111 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-17 03:58:39.765114 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-17 03:58:39.765118 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 03:58:39.765129 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-17 03:58:39.765133 | orchestrator |
2026-04-17 03:58:39.765137 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 03:58:39.765141 | orchestrator | Friday 17 April 2026 03:58:33 +0000 (0:00:01.086) 0:00:22.408 **********
2026-04-17 03:58:39.765156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-17 03:58:39.765161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-17 03:58:39.765165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-17 03:58:39.765168 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:58:39.765172 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 03:58:39.765176 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 03:58:39.765180 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 03:58:39.765183 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:58:39.765187 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-17 03:58:39.765191 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-17 03:58:39.765194 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-17 03:58:39.765198 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:58:39.765202 | orchestrator |
2026-04-17 03:58:39.765206 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-17 03:58:39.765209 | orchestrator | Friday 17 April 2026 03:58:34 +0000 (0:00:00.374) 0:00:22.783 **********
2026-04-17 03:58:39.765214 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 03:58:39.765218 | orchestrator |
2026-04-17 03:58:39.765222 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 03:58:39.765227 | orchestrator | Friday 17 April 2026 03:58:34 +0000 (0:00:00.789) 0:00:23.572 **********
2026-04-17 03:58:39.765230 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:58:39.765234 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:58:39.765238 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:58:39.765242 | orchestrator |
2026-04-17 03:58:39.765245 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 03:58:39.765249 | orchestrator | Friday 17 April 2026 03:58:35 +0000 (0:00:00.350) 0:00:23.923 **********
2026-04-17 03:58:39.765253 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:58:39.765257 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:58:39.765261 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:58:39.765264 | orchestrator |
2026-04-17 03:58:39.765268 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 03:58:39.765272 | orchestrator | Friday 17 April 2026 03:58:35 +0000 (0:00:00.354) 0:00:24.278 **********
2026-04-17 03:58:39.765275 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:58:39.765279 | orchestrator | skipping: [testbed-node-4]
2026-04-17 03:58:39.765283 | orchestrator | skipping: [testbed-node-5]
2026-04-17 03:58:39.765286 | orchestrator |
2026-04-17 03:58:39.765290 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 03:58:39.765294 | orchestrator | Friday 17 April 2026 03:58:36 +0000 (0:00:00.536) 0:00:24.815 **********
2026-04-17 03:58:39.765298 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:58:39.765301 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:58:39.765305 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:58:39.765309 | orchestrator |
2026-04-17 03:58:39.765312 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 03:58:39.765316 | orchestrator | Friday 17 April 2026 03:58:36 +0000 (0:00:00.421) 0:00:25.236 **********
2026-04-17 03:58:39.765320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:58:39.765324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:58:39.765327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:58:39.765334 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:58:39.765338 | orchestrator |
2026-04-17 03:58:39.765346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 03:58:39.765350 | orchestrator | Friday 17 April 2026 03:58:36 +0000 (0:00:00.391) 0:00:25.627 **********
2026-04-17 03:58:39.765353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:58:39.765357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:58:39.765361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:58:39.765365 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:58:39.765368 | orchestrator |
2026-04-17 03:58:39.765372 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 03:58:39.765376 | orchestrator | Friday 17 April 2026 03:58:37 +0000 (0:00:00.382) 0:00:26.009 **********
2026-04-17 03:58:39.765379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 03:58:39.765383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 03:58:39.765387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 03:58:39.765390 | orchestrator | skipping: [testbed-node-3]
2026-04-17 03:58:39.765394 | orchestrator |
2026-04-17 03:58:39.765398 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 03:58:39.765402 | orchestrator | Friday 17 April 2026 03:58:37 +0000 (0:00:00.404) 0:00:26.414 **********
2026-04-17 03:58:39.765405 | orchestrator | ok: [testbed-node-3]
2026-04-17 03:58:39.765409 | orchestrator | ok: [testbed-node-4]
2026-04-17 03:58:39.765413 | orchestrator | ok: [testbed-node-5]
2026-04-17 03:58:39.765416 | orchestrator |
2026-04-17 03:58:39.765420 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 03:58:39.765424 | orchestrator | Friday 17 April 2026 03:58:38 +0000 (0:00:00.342) 0:00:26.756 **********
2026-04-17 03:58:39.765428 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-17 03:58:39.765431 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-17 03:58:39.765435 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-17 03:58:39.765439 | orchestrator |
2026-04-17 03:58:39.765443 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-17 03:58:39.765446 | orchestrator | Friday 17 April 2026 03:58:38 +0000 (0:00:00.804) 0:00:27.560 **********
2026-04-17 03:58:39.765450 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 03:58:39.765457 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 04:00:19.126742 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 04:00:19.126830 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 04:00:19.126839 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 04:00:19.126845 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 04:00:19.126852 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 04:00:19.126858 | orchestrator |
2026-04-17 04:00:19.126864 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-17 04:00:19.126870 | orchestrator | Friday 17 April 2026 03:58:39 +0000 (0:00:00.825) 0:00:28.386 **********
2026-04-17 04:00:19.126875 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 04:00:19.126880 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 04:00:19.126885 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 04:00:19.126891 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 04:00:19.126896 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 04:00:19.126901 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 04:00:19.126922 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 04:00:19.126929 | orchestrator |
2026-04-17 04:00:19.126937 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-04-17 04:00:19.126947 | orchestrator | Friday 17 April 2026 03:58:41 +0000 (0:00:01.695) 0:00:30.082 **********
2026-04-17 04:00:19.126960 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:00:19.127043 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:00:19.127053 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-04-17 04:00:19.127060 | orchestrator |
2026-04-17 04:00:19.127068 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-04-17 04:00:19.127077 | orchestrator | Friday 17 April 2026 03:58:42 +0000 (0:00:00.576) 0:00:30.659 **********
2026-04-17 04:00:19.127088 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-17 04:00:19.127098 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-17 04:00:19.127121 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-17 04:00:19.127126 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-17 04:00:19.127132 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-17 04:00:19.127137 | orchestrator |
2026-04-17 04:00:19.127142 | orchestrator | TASK [generate keys] ***********************************************************
2026-04-17 04:00:19.127147 | orchestrator | Friday 17 April 2026 03:59:27 +0000 (0:00:45.670) 0:01:16.329 **********
2026-04-17 04:00:19.127152 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127157 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127162 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127167 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127177 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127183 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-04-17 04:00:19.127188 | orchestrator |
2026-04-17 04:00:19.127193 | orchestrator | TASK [get keys from monitors] **************************************************
2026-04-17 04:00:19.127198 | orchestrator | Friday 17 April 2026 03:59:50 +0000 (0:00:23.004) 0:01:39.333 **********
2026-04-17 04:00:19.127217 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127222 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127235 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127241 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127246 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127251 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127256 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-17 04:00:19.127261 | orchestrator |
2026-04-17 04:00:19.127266 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-04-17 04:00:19.127271 | orchestrator | Friday 17 April 2026 04:00:02 +0000 (0:00:11.341) 0:01:50.675 **********
2026-04-17 04:00:19.127276 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127281 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-17 04:00:19.127286 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-17 04:00:19.127292 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127298 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-17 04:00:19.127304 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-17 04:00:19.127310 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127316 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-17 04:00:19.127322 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-17 04:00:19.127328 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127333 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-17 04:00:19.127339 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-17 04:00:19.127345 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127353 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-17 04:00:19.127362 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-17 04:00:19.127373 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 04:00:19.127386 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-17 04:00:19.127394 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-17 04:00:19.127402 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-04-17 04:00:19.127410 | orchestrator |
2026-04-17 04:00:19.127418 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:00:19.127432 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-17 04:00:19.127443 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-17 04:00:19.127453 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-17 04:00:19.127461 | orchestrator |
2026-04-17 04:00:19.127470 | orchestrator |
2026-04-17 04:00:19.127478 | orchestrator |
2026-04-17 04:00:19.127487 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:00:19.127495 | orchestrator | Friday 17 April 2026 04:00:19 +0000 (0:00:17.060) 0:02:07.735 **********
2026-04-17 04:00:19.127504 | orchestrator | ===============================================================================
2026-04-17 04:00:19.127513 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.67s
2026-04-17 04:00:19.127529 | orchestrator | generate keys ---------------------------------------------------------- 23.00s
2026-04-17 04:00:19.127537 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.06s
2026-04-17 04:00:19.127547 | orchestrator | get keys from monitors ------------------------------------------------- 11.34s
2026-04-17 04:00:19.127553 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.20s
2026-04-17 04:00:19.127559 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.70s
2026-04-17 04:00:19.127564 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.60s
2026-04-17 04:00:19.127569 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.09s
2026-04-17 04:00:19.127574 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.08s
2026-04-17 04:00:19.127579 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.90s
2026-04-17 04:00:19.127586 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s
2026-04-17 04:00:19.127594 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.83s
2026-04-17 04:00:19.127602 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.80s
2026-04-17 04:00:19.127615 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.79s
2026-04-17 04:00:19.536189 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.74s
2026-04-17 04:00:19.536326 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.73s
2026-04-17 04:00:19.536355 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.71s
2026-04-17 04:00:19.536376 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s
2026-04-17 04:00:19.536396 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s
2026-04-17 04:00:19.536415 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s
2026-04-17 04:00:22.091923 | orchestrator | 2026-04-17 04:00:22 | INFO  | Task 9749c98e-0125-4cd4-a89f-142dd406cbde (copy-ceph-keys) was prepared for execution.
2026-04-17 04:00:22.092094 | orchestrator | 2026-04-17 04:00:22 | INFO  | It takes a moment until task 9749c98e-0125-4cd4-a89f-142dd406cbde (copy-ceph-keys) has been started and output is visible here.
2026-04-17 04:01:00.794386 | orchestrator |
2026-04-17 04:01:00.794526 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-04-17 04:01:00.794551 | orchestrator |
2026-04-17 04:01:00.794568 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-04-17 04:01:00.794585 | orchestrator | Friday 17 April 2026 04:00:26 +0000 (0:00:00.172) 0:00:00.172 **********
2026-04-17 04:01:00.794600 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-17 04:01:00.794618 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.794633 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.794649 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-17 04:01:00.794666 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.794682 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-17 04:01:00.794699 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-17 04:01:00.794715 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-17 04:01:00.794731 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-17 04:01:00.794779 | orchestrator |
2026-04-17 04:01:00.794798 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-17 04:01:00.794814 | orchestrator | Friday 17 April 2026 04:00:31 +0000 (0:00:04.565) 0:00:04.738 **********
2026-04-17 04:01:00.794830 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-17 04:01:00.794848 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.794882 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.794899 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-17 04:01:00.794914 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.794928 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-17 04:01:00.794945 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-17 04:01:00.795061 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-17 04:01:00.795078 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-17 04:01:00.795094 | orchestrator |
2026-04-17 04:01:00.795110 | orchestrator | TASK [Create share directory] **************************************************
2026-04-17 04:01:00.795125 | orchestrator | Friday 17 April 2026 04:00:35 +0000 (0:00:04.106) 0:00:08.844 **********
2026-04-17 04:01:00.795142 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-17 04:01:00.795158 | orchestrator |
2026-04-17 04:01:00.795174 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-17 04:01:00.795191 | orchestrator | Friday 17 April 2026 04:00:36 +0000 (0:00:01.012) 0:00:09.857 **********
2026-04-17 04:01:00.795208 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-17 04:01:00.795227 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.795243 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.795260 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-17 04:01:00.795278 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.795294 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-17 04:01:00.795311 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-17 04:01:00.795328 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-17 04:01:00.795344 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-17 04:01:00.795381 | orchestrator |
2026-04-17 04:01:00.795411 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-17 04:01:00.795426 | orchestrator | Friday 17 April 2026 04:00:49 +0000 (0:00:13.722) 0:00:23.579 **********
2026-04-17 04:01:00.795441 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-17 04:01:00.795457 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-17 04:01:00.795474 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-17 04:01:00.795491 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-17 04:01:00.795528 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-17 04:01:00.795539 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-17 04:01:00.795568 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-17 04:01:00.795581 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-17 04:01:00.795598 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-17 04:01:00.795617 | orchestrator |
2026-04-17 04:01:00.795629 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-17 04:01:00.795642 | orchestrator | Friday 17 April 2026 04:00:53 +0000 (0:00:03.289) 0:00:26.869 **********
2026-04-17 04:01:00.795656 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-17 04:01:00.795669 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.795680 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.795693 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-17 04:01:00.795704 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-17 04:01:00.795716 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-17 04:01:00.795728 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-17 04:01:00.795740 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-17 04:01:00.795751 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-17 04:01:00.795764 | orchestrator |
2026-04-17 04:01:00.795778 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:01:00.795792 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:01:00.795807 | orchestrator |
2026-04-17 04:01:00.795821 | orchestrator |
2026-04-17 04:01:00.795847 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:01:00.795862 | orchestrator | Friday 17 April 2026 04:01:00 +0000 (0:00:07.274) 0:00:34.144 **********
2026-04-17 04:01:00.795875 | orchestrator | ===============================================================================
2026-04-17 04:01:00.795888 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.72s
2026-04-17 04:01:00.795900 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.27s
2026-04-17 04:01:00.795914 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.57s
2026-04-17 04:01:00.795928 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.11s
2026-04-17 04:01:00.795936 | orchestrator | Check if target directories exist --------------------------------------- 3.29s
2026-04-17 04:01:00.795944 | orchestrator | Create share directory -------------------------------------------------- 1.01s
2026-04-17 04:01:13.214419 | orchestrator | 2026-04-17 04:01:13 | INFO  | Task 7d080948-5fce-4430-afdd-561f0cd4bbff (cephclient) was prepared for execution.
2026-04-17 04:01:13.214549 | orchestrator | 2026-04-17 04:01:13 | INFO  | It takes a moment until task 7d080948-5fce-4430-afdd-561f0cd4bbff (cephclient) has been started and output is visible here.
2026-04-17 04:02:15.770290 | orchestrator |
2026-04-17 04:02:15.770445 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-17 04:02:15.770467 | orchestrator |
2026-04-17 04:02:15.770480 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-17 04:02:15.770494 | orchestrator | Friday 17 April 2026 04:01:17 +0000 (0:00:00.247) 0:00:00.247 **********
2026-04-17 04:02:15.770508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-17 04:02:15.770524 | orchestrator |
2026-04-17 04:02:15.770540 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-17 04:02:15.770590 | orchestrator | Friday 17 April 2026 04:01:17 +0000 (0:00:00.256) 0:00:00.503 **********
2026-04-17 04:02:15.770606 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-17 04:02:15.770618 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-17 04:02:15.770627 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-17 04:02:15.770636 | orchestrator |
2026-04-17 04:02:15.770644 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-17 04:02:15.770652 | orchestrator | Friday 17 April 2026 04:01:19 +0000 (0:00:01.246) 0:00:01.750 **********
2026-04-17 04:02:15.770661 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-17 04:02:15.770669 | orchestrator |
2026-04-17 04:02:15.770677 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-17 04:02:15.770685 | orchestrator | Friday 17 April 2026 04:01:20 +0000 (0:00:00.934) 0:00:03.206 **********
2026-04-17 04:02:15.770693 | orchestrator | changed: [testbed-manager]
2026-04-17 04:02:15.770702 | orchestrator |
2026-04-17 04:02:15.770710 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-17 04:02:15.770717 | orchestrator | Friday 17 April 2026 04:01:21 +0000 (0:00:00.952) 0:00:04.140 **********
2026-04-17 04:02:15.770725 | orchestrator | changed: [testbed-manager]
2026-04-17 04:02:15.770733 | orchestrator |
2026-04-17 04:02:15.770741 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-17 04:02:15.770749 | orchestrator | Friday 17 April 2026 04:01:22 +0000 (0:00:00.952) 0:00:05.093 **********
2026-04-17 04:02:15.770757 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-04-17 04:02:15.770765 | orchestrator | ok: [testbed-manager]
2026-04-17 04:02:15.770773 | orchestrator |
2026-04-17 04:02:15.770781 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-17 04:02:15.770788 | orchestrator | Friday 17 April 2026 04:02:05 +0000 (0:00:43.177) 0:00:48.270 **********
2026-04-17 04:02:15.770797 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-17 04:02:15.770806 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-17 04:02:15.770816 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-17 04:02:15.770825 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-17 04:02:15.770838 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-17 04:02:15.770852 | orchestrator |
2026-04-17 04:02:15.770866 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-17 04:02:15.770879 | orchestrator | Friday 17 April 2026 04:02:09 +0000 (0:00:04.104) 0:00:52.375 **********
2026-04-17 04:02:15.770892 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-17 04:02:15.770906 | orchestrator |
2026-04-17 04:02:15.770919 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-17 04:02:15.770971 | orchestrator | Friday 17 April 2026 04:02:10 +0000 (0:00:00.150) 0:00:52.850 **********
2026-04-17 04:02:15.770986 | orchestrator | skipping: [testbed-manager]
2026-04-17 04:02:15.770999 | orchestrator |
2026-04-17 04:02:15.771012 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-17 04:02:15.771027 | orchestrator | Friday 17 April 2026 04:02:10 +0000 (0:00:00.150) 0:00:53.000 **********
2026-04-17 04:02:15.771041 | orchestrator | skipping: [testbed-manager]
2026-04-17 04:02:15.771054 | orchestrator |
2026-04-17 04:02:15.771068 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-17 04:02:15.771078 | orchestrator | Friday 17 April 2026 04:02:10 +0000 (0:00:00.599) 0:00:53.600 **********
2026-04-17 04:02:15.771087 | orchestrator | changed: [testbed-manager]
2026-04-17 04:02:15.771096 | orchestrator |
2026-04-17 04:02:15.771105 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-17 04:02:15.771132 | orchestrator | Friday 17 April 2026 04:02:12 +0000 (0:00:01.691) 0:00:55.291 **********
2026-04-17 04:02:15.771142 | orchestrator | changed: [testbed-manager]
2026-04-17 04:02:15.771167 | orchestrator |
2026-04-17 04:02:15.771176 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-17 04:02:15.771183 | orchestrator | Friday 17 April 2026 04:02:13 +0000 (0:00:00.691) 0:00:55.983 **********
2026-04-17 04:02:15.771194 | orchestrator | changed: [testbed-manager]
2026-04-17 04:02:15.771207 | orchestrator |
2026-04-17 04:02:15.771220 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-17 04:02:15.771232 | orchestrator | Friday 17 April 2026 04:02:13 +0000 (0:00:00.608) 0:00:56.591 **********
2026-04-17 04:02:15.771246 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-17 04:02:15.771257 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-17 04:02:15.771268 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-17 04:02:15.771278 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-17 04:02:15.771290 | orchestrator |
2026-04-17 04:02:15.771303 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:02:15.771319 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 04:02:15.771334 | orchestrator |
2026-04-17 04:02:15.771347 | orchestrator |
2026-04-17 04:02:15.771386 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:02:15.771395 | orchestrator | Friday 17 April 2026 04:02:15 +0000 (0:00:01.477) 0:00:58.069 **********
2026-04-17 04:02:15.771403 | orchestrator | ===============================================================================
2026-04-17 04:02:15.771410 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.18s
2026-04-17 04:02:15.771418 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.10s
2026-04-17 04:02:15.771426 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.69s
2026-04-17 04:02:15.771434 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.48s
2026-04-17 04:02:15.771442 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.46s
2026-04-17 04:02:15.771449 |
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s 2026-04-17 04:02:15.771457 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s 2026-04-17 04:02:15.771465 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.93s 2026-04-17 04:02:15.771473 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s 2026-04-17 04:02:15.771480 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s 2026-04-17 04:02:15.771493 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.60s 2026-04-17 04:02:15.771506 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2026-04-17 04:02:15.771519 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2026-04-17 04:02:15.771532 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-04-17 04:02:18.203124 | orchestrator | 2026-04-17 04:02:18 | INFO  | Task 255eb616-6013-46fc-9cd0-01839a7f3b2b (ceph-bootstrap-dashboard) was prepared for execution. 2026-04-17 04:02:18.203270 | orchestrator | 2026-04-17 04:02:18 | INFO  | It takes a moment until task 255eb616-6013-46fc-9cd0-01839a7f3b2b (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-04-17 04:03:36.486855 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-17 04:03:36.486989 | orchestrator | 2.16.14 2026-04-17 04:03:36.486999 | orchestrator | 2026-04-17 04:03:36.487005 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-17 04:03:36.487011 | orchestrator | 2026-04-17 04:03:36.487016 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-17 04:03:36.487021 | orchestrator | Friday 17 April 2026 04:02:22 +0000 (0:00:00.266) 0:00:00.266 ********** 2026-04-17 04:03:36.487045 | orchestrator | changed: [testbed-manager] 2026-04-17 04:03:36.487051 | orchestrator | 2026-04-17 04:03:36.487056 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-17 04:03:36.487061 | orchestrator | Friday 17 April 2026 04:02:24 +0000 (0:00:01.843) 0:00:02.110 ********** 2026-04-17 04:03:36.487066 | orchestrator | changed: [testbed-manager] 2026-04-17 04:03:36.487071 | orchestrator | 2026-04-17 04:03:36.487076 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-17 04:03:36.487081 | orchestrator | Friday 17 April 2026 04:02:25 +0000 (0:00:01.038) 0:00:03.148 ********** 2026-04-17 04:03:36.487086 | orchestrator | changed: [testbed-manager] 2026-04-17 04:03:36.487090 | orchestrator | 2026-04-17 04:03:36.487095 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-17 04:03:36.487100 | orchestrator | Friday 17 April 2026 04:02:26 +0000 (0:00:01.039) 0:00:04.188 ********** 2026-04-17 04:03:36.487105 | orchestrator | changed: [testbed-manager] 2026-04-17 04:03:36.487110 | orchestrator | 2026-04-17 04:03:36.487115 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-17 04:03:36.487120 | orchestrator | Friday 17 April 2026 
04:02:27 +0000 (0:00:01.210) 0:00:05.398 ********** 2026-04-17 04:03:36.487124 | orchestrator | changed: [testbed-manager] 2026-04-17 04:03:36.487129 | orchestrator | 2026-04-17 04:03:36.487134 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-17 04:03:36.487139 | orchestrator | Friday 17 April 2026 04:02:28 +0000 (0:00:01.086) 0:00:06.484 ********** 2026-04-17 04:03:36.487143 | orchestrator | changed: [testbed-manager] 2026-04-17 04:03:36.487148 | orchestrator | 2026-04-17 04:03:36.487164 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-17 04:03:36.487170 | orchestrator | Friday 17 April 2026 04:02:29 +0000 (0:00:01.118) 0:00:07.603 ********** 2026-04-17 04:03:36.487175 | orchestrator | changed: [testbed-manager] 2026-04-17 04:03:36.487180 | orchestrator | 2026-04-17 04:03:36.487184 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-17 04:03:36.487189 | orchestrator | Friday 17 April 2026 04:02:32 +0000 (0:00:02.080) 0:00:09.683 ********** 2026-04-17 04:03:36.487194 | orchestrator | changed: [testbed-manager] 2026-04-17 04:03:36.487199 | orchestrator | 2026-04-17 04:03:36.487204 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-17 04:03:36.487208 | orchestrator | Friday 17 April 2026 04:02:33 +0000 (0:00:01.146) 0:00:10.829 ********** 2026-04-17 04:03:36.487213 | orchestrator | changed: [testbed-manager] 2026-04-17 04:03:36.487218 | orchestrator | 2026-04-17 04:03:36.487222 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-17 04:03:36.487227 | orchestrator | Friday 17 April 2026 04:03:12 +0000 (0:00:38.942) 0:00:49.771 ********** 2026-04-17 04:03:36.487232 | orchestrator | skipping: [testbed-manager] 2026-04-17 04:03:36.487237 | orchestrator | 2026-04-17 04:03:36.487241 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-04-17 04:03:36.487246 | orchestrator | 2026-04-17 04:03:36.487259 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-17 04:03:36.487263 | orchestrator | Friday 17 April 2026 04:03:12 +0000 (0:00:00.182) 0:00:49.954 ********** 2026-04-17 04:03:36.487268 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:03:36.487273 | orchestrator | 2026-04-17 04:03:36.487278 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-17 04:03:36.487282 | orchestrator | 2026-04-17 04:03:36.487287 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-17 04:03:36.487292 | orchestrator | Friday 17 April 2026 04:03:23 +0000 (0:00:11.586) 0:01:01.541 ********** 2026-04-17 04:03:36.487296 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:03:36.487301 | orchestrator | 2026-04-17 04:03:36.487306 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-17 04:03:36.487311 | orchestrator | 2026-04-17 04:03:36.487315 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-17 04:03:36.487325 | orchestrator | Friday 17 April 2026 04:03:34 +0000 (0:00:11.089) 0:01:12.631 ********** 2026-04-17 04:03:36.487330 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:03:36.487335 | orchestrator | 2026-04-17 04:03:36.487340 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 04:03:36.487346 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 04:03:36.487352 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 04:03:36.487358 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 04:03:36.487366 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 04:03:36.487374 | orchestrator | 2026-04-17 04:03:36.487385 | orchestrator | 2026-04-17 04:03:36.487397 | orchestrator | 2026-04-17 04:03:36.487404 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 04:03:36.487413 | orchestrator | Friday 17 April 2026 04:03:36 +0000 (0:00:01.160) 0:01:13.791 ********** 2026-04-17 04:03:36.487421 | orchestrator | =============================================================================== 2026-04-17 04:03:36.487430 | orchestrator | Create admin user ------------------------------------------------------ 38.94s 2026-04-17 04:03:36.487453 | orchestrator | Restart ceph manager service ------------------------------------------- 23.84s 2026-04-17 04:03:36.487462 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.08s 2026-04-17 04:03:36.487471 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.84s 2026-04-17 04:03:36.487479 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.21s 2026-04-17 04:03:36.487489 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.15s 2026-04-17 04:03:36.487495 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.12s 2026-04-17 04:03:36.487501 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.09s 2026-04-17 04:03:36.487507 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.04s 2026-04-17 04:03:36.487515 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.04s 2026-04-17 04:03:36.487524 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.18s 2026-04-17 04:03:36.699460 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-04-17 04:03:38.496193 | orchestrator | 2026-04-17 04:03:38 | INFO  | Task b149e0f1-8647-487a-ad29-1fa82056f6f6 (keystone) was prepared for execution. 2026-04-17 04:03:38.496307 | orchestrator | 2026-04-17 04:03:38 | INFO  | It takes a moment until task b149e0f1-8647-487a-ad29-1fa82056f6f6 (keystone) has been started and output is visible here. 2026-04-17 04:03:45.230254 | orchestrator | 2026-04-17 04:03:45.230349 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 04:03:45.230364 | orchestrator | 2026-04-17 04:03:45.230372 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 04:03:45.230381 | orchestrator | Friday 17 April 2026 04:03:42 +0000 (0:00:00.258) 0:00:00.258 ********** 2026-04-17 04:03:45.230390 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:03:45.230396 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:03:45.230414 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:03:45.230419 | orchestrator | 2026-04-17 04:03:45.230424 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 04:03:45.230429 | orchestrator | Friday 17 April 2026 04:03:42 +0000 (0:00:00.291) 0:00:00.549 ********** 2026-04-17 04:03:45.230433 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-17 04:03:45.230456 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-17 04:03:45.230460 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-17 04:03:45.230465 | orchestrator | 2026-04-17 04:03:45.230470 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-17 04:03:45.230474 | orchestrator | 2026-04-17 04:03:45.230479 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-04-17 04:03:45.230483 | orchestrator | Friday 17 April 2026 04:03:43 +0000 (0:00:00.407) 0:00:00.957 ********** 2026-04-17 04:03:45.230488 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:03:45.230494 | orchestrator | 2026-04-17 04:03:45.230498 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-17 04:03:45.230503 | orchestrator | Friday 17 April 2026 04:03:43 +0000 (0:00:00.543) 0:00:01.501 ********** 2026-04-17 04:03:45.230511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:45.230518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:45.230537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:45.230552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 04:03:45.230565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 04:03:45.230570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 04:03:45.230578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 04:03:45.230586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 04:03:45.230594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 04:03:45.230601 | orchestrator | 2026-04-17 04:03:45.230610 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-04-17 04:03:45.230629 | orchestrator | Friday 17 April 2026 04:03:45 +0000 (0:00:01.554) 0:00:03.055 ********** 2026-04-17 04:03:50.517715 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:03:50.517798 | orchestrator | 2026-04-17 04:03:50.517804 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-17 04:03:50.517810 | orchestrator | Friday 17 April 2026 04:03:45 +0000 (0:00:00.270) 0:00:03.326 ********** 2026-04-17 04:03:50.517814 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:03:50.517818 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:03:50.517833 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:03:50.517837 | orchestrator | 2026-04-17 04:03:50.517841 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-17 04:03:50.517845 | orchestrator | Friday 17 April 2026 04:03:45 +0000 (0:00:00.310) 0:00:03.637 ********** 2026-04-17 04:03:50.517849 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 04:03:50.517854 | orchestrator | 2026-04-17 04:03:50.517858 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-17 04:03:50.517862 | orchestrator | Friday 17 April 2026 04:03:46 +0000 (0:00:00.771) 0:00:04.408 ********** 2026-04-17 04:03:50.517866 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:03:50.517871 | orchestrator | 2026-04-17 04:03:50.517875 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-17 04:03:50.517878 | orchestrator | Friday 17 April 2026 04:03:47 +0000 (0:00:00.509) 0:00:04.917 ********** 2026-04-17 04:03:50.517885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:50.517955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:50.517963 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:50.517998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 04:03:50.518004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 04:03:50.518009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 04:03:50.518045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 04:03:50.518050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 04:03:50.518054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 04:03:50.518062 | orchestrator | 2026-04-17 04:03:50.518066 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-17 04:03:50.518078 | orchestrator | Friday 17 April 2026 04:03:49 +0000 (0:00:02.896) 0:00:07.814 ********** 2026-04-17 04:03:50.518094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 04:03:51.292286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 04:03:51.292461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 04:03:51.292478 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:03:51.292489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 04:03:51.292518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 04:03:51.292530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 04:03:51.292536 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:03:51.292559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 04:03:51.292567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-17 04:03:51.292573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 04:03:51.292579 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:03:51.292585 | orchestrator | 2026-04-17 04:03:51.292592 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-17 04:03:51.292605 | orchestrator | Friday 17 April 2026 04:03:50 +0000 (0:00:00.533) 0:00:08.348 ********** 2026-04-17 04:03:51.292611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 04:03:51.292622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 04:03:51.292635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 04:03:54.392235 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:03:54.392328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 04:03:54.392342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 04:03:54.392371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 04:03:54.392379 | 
orchestrator | skipping: [testbed-node-1] 2026-04-17 04:03:54.392399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 04:03:54.392441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 04:03:54.392481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 04:03:54.392494 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:03:54.392505 | orchestrator | 2026-04-17 04:03:54.392516 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-17 04:03:54.392528 | orchestrator | Friday 17 April 2026 04:03:51 +0000 (0:00:00.775) 0:00:09.123 ********** 2026-04-17 04:03:54.392539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:54.392562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:54.392582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:54.392605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 04:03:58.740437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 04:03:58.740548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-04-17 04:03:58.740558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 04:03:58.740565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 04:03:58.740590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 
04:03:58.740596 | orchestrator | 2026-04-17 04:03:58.740604 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-17 04:03:58.740612 | orchestrator | Friday 17 April 2026 04:03:54 +0000 (0:00:03.095) 0:00:12.219 ********** 2026-04-17 04:03:58.740634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:58.740641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-04-17 04:03:58.740654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:58.740662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 04:03:58.740673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 04:03:58.740683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 04:04:02.092441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 04:04:02.092534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 04:04:02.092541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 04:04:02.092546 | orchestrator | 2026-04-17 04:04:02.092551 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-17 04:04:02.092557 | orchestrator | Friday 17 April 2026 04:03:58 +0000 (0:00:04.346) 0:00:16.566 ********** 2026-04-17 04:04:02.092561 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:04:02.092566 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:04:02.092569 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:04:02.092573 | orchestrator | 
2026-04-17 04:04:02.092577 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-04-17 04:04:02.092581 | orchestrator | Friday 17 April 2026 04:04:00 +0000 (0:00:01.397) 0:00:17.964 **********
2026-04-17 04:04:02.092585 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:04:02.092589 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:04:02.092592 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:04:02.092596 | orchestrator |
2026-04-17 04:04:02.092600 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-04-17 04:04:02.092604 | orchestrator | Friday 17 April 2026 04:04:00 +0000 (0:00:00.661) 0:00:18.625 **********
2026-04-17 04:04:02.092608 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:04:02.092611 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:04:02.092615 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:04:02.092619 | orchestrator |
2026-04-17 04:04:02.092623 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-04-17 04:04:02.092627 | orchestrator | Friday 17 April 2026 04:04:01 +0000 (0:00:00.419) 0:00:19.045 **********
2026-04-17 04:04:02.092642 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:04:02.092646 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:04:02.092650 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:04:02.092654 | orchestrator |
2026-04-17 04:04:02.092658 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-04-17 04:04:02.092662 | orchestrator | Friday 17 April 2026 04:04:01 +0000 (0:00:00.288) 0:00:19.333 **********
2026-04-17 04:04:02.092679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-17 04:04:02.092690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 04:04:02.092695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 04:04:02.092699 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:04:02.092703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-17 04:04:02.092710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 04:04:02.092715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 04:04:02.092724 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:04:02.092734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-17 04:04:20.270624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 04:04:20.270737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 04:04:20.270751 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:04:20.270762 | orchestrator |
2026-04-17 04:04:20.270772 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-17 04:04:20.270782 | orchestrator | Friday 17 April 2026 04:04:02 +0000 (0:00:00.585) 0:00:19.918 **********
2026-04-17 04:04:20.270791 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:04:20.270800 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:04:20.270808 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:04:20.270816 | orchestrator |
2026-04-17 04:04:20.270825 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-04-17 04:04:20.270834 | orchestrator | Friday 17 April 2026 04:04:02 +0000 (0:00:00.303) 0:00:20.222 **********
2026-04-17 04:04:20.270843 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-17 04:04:20.270853 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-17 04:04:20.270862 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-17 04:04:20.270870 | orchestrator |
2026-04-17 04:04:20.270956 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-04-17 04:04:20.270967 | orchestrator | Friday 17 April 2026 04:04:04 +0000 (0:00:01.821) 0:00:22.043 **********
2026-04-17 04:04:20.270990 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 04:04:20.270999 | orchestrator |
2026-04-17 04:04:20.271008 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-04-17 04:04:20.271016 | orchestrator | Friday 17 April 2026 04:04:05 +0000 (0:00:00.905) 0:00:22.949 **********
2026-04-17 04:04:20.271024 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:04:20.271032 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:04:20.271040 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:04:20.271047 | orchestrator |
2026-04-17 04:04:20.271055 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-04-17 04:04:20.271064 | orchestrator | Friday 17 April 2026 04:04:05 +0000 (0:00:00.554) 0:00:23.503 **********
2026-04-17 04:04:20.271071 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-17 04:04:20.271080 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 04:04:20.271088 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-17 04:04:20.271096 | orchestrator |
2026-04-17 04:04:20.271105 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-04-17 04:04:20.271114 | orchestrator | Friday 17 April 2026 04:04:06 +0000 (0:00:01.034) 0:00:24.538 **********
2026-04-17 04:04:20.271122 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:04:20.271132 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:04:20.271140 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:04:20.271148 | orchestrator |
2026-04-17 04:04:20.271157 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-04-17 04:04:20.271165 | orchestrator | Friday 17 April 2026 04:04:07 +0000 (0:00:00.537) 0:00:25.076 **********
2026-04-17 04:04:20.271175 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-17 04:04:20.271185 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-17 04:04:20.271194 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-17 04:04:20.271203 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-17 04:04:20.271212 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-17 04:04:20.271221 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-17 04:04:20.271231 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-17 04:04:20.271243 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-17 04:04:20.271271 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-17 04:04:20.271281 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-17 04:04:20.271290 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-17 04:04:20.271300 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-17 04:04:20.271310 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-17 04:04:20.271319 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-17 04:04:20.271329 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-17 04:04:20.271340 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 04:04:20.271349 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 04:04:20.271371 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 04:04:20.271380 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 04:04:20.271390 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 04:04:20.271399 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 04:04:20.271407 | orchestrator |
2026-04-17 04:04:20.271418 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-04-17 04:04:20.271427 | orchestrator | Friday 17 April 2026 04:04:15 +0000 (0:00:08.517) 0:00:33.594 **********
2026-04-17 04:04:20.271437 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 04:04:20.271446 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 04:04:20.271455 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 04:04:20.271464 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 04:04:20.271473 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 04:04:20.271482 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 04:04:20.271490 | orchestrator |
2026-04-17 04:04:20.271499 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-04-17 04:04:20.271507 | orchestrator | Friday 17 April 2026 04:04:18 +0000 (0:00:02.306) 0:00:35.900 **********
2026-04-17 04:04:20.271526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-17 04:04:20.271548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-17 04:06:04.660111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-17 04:06:04.660261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 04:06:04.660294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 04:06:04.660307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 04:06:04.660318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 04:06:04.660348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 04:06:04.660370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 04:06:04.660382 | orchestrator |
2026-04-17 04:06:04.660395 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-17 04:06:04.660408 | orchestrator | Friday 17 April 2026 04:04:20 +0000 (0:00:00.576) 0:00:38.095 **********
2026-04-17 04:06:04.660419 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:06:04.660431 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:06:04.660442 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:06:04.660453 | orchestrator |
2026-04-17 04:06:04.660464 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-04-17 04:06:04.660475 | orchestrator | Friday 17 April 2026 04:04:20 +0000 (0:00:00.576) 0:00:38.672 **********
2026-04-17 04:06:04.660486 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:06:04.660497 | orchestrator |
2026-04-17 04:06:04.660507 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-04-17 04:06:04.660518 | orchestrator | Friday 17 April 2026 04:04:23 +0000 (0:00:02.194) 0:00:40.867 **********
2026-04-17 04:06:04.660529 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:06:04.660540 | orchestrator |
2026-04-17 04:06:04.660551 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-04-17 04:06:04.660561 | orchestrator | Friday 17 April 2026 04:04:25 +0000 (0:00:02.096) 0:00:42.963 **********
2026-04-17 04:06:04.660580 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:06:04.660598 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:06:04.660627 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:06:04.660649 | orchestrator |
2026-04-17 04:06:04.660666 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-04-17 04:06:04.660684 | orchestrator | Friday 17 April 2026 04:04:26 +0000 (0:00:00.936) 0:00:43.900 **********
2026-04-17 04:06:04.660702 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:06:04.660720 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:06:04.660739 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:06:04.660758 | orchestrator |
2026-04-17 04:06:04.660776 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-04-17 04:06:04.660792 | orchestrator | Friday 17 April 2026 04:04:26 +0000 (0:00:00.321) 0:00:44.222 **********
2026-04-17 04:06:04.660803 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:06:04.660822 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:06:04.660833 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:06:04.660844 | orchestrator |
2026-04-17 04:06:04.660889 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-04-17 04:06:04.660901 | orchestrator | Friday 17 April 2026 04:04:26 +0000 (0:00:00.533) 0:00:44.756 **********
2026-04-17 04:06:04.660912 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:06:04.660923 | orchestrator |
2026-04-17 04:06:04.660934 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-04-17 04:06:04.660945 | orchestrator | Friday 17 April 2026 04:04:40 +0000 (0:00:13.966) 0:00:58.723 **********
2026-04-17 04:06:04.660955 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:06:04.660966 | orchestrator |
2026-04-17 04:06:04.660977 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-17 04:06:04.660988 | orchestrator | Friday 17 April 2026 04:04:50 +0000 (0:00:09.772) 0:01:08.495 **********
2026-04-17 04:06:04.661000 | orchestrator |
2026-04-17 04:06:04.661019 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-17 04:06:04.661059 | orchestrator | Friday 17 April 2026 04:04:50 +0000 (0:00:00.079) 0:01:08.575 **********
2026-04-17 04:06:04.661081 | orchestrator |
2026-04-17 04:06:04.661098 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-17 04:06:04.661116 | orchestrator | Friday 17 April 2026 04:04:50 +0000 (0:00:00.076) 0:01:08.651 **********
2026-04-17 04:06:04.661133 | orchestrator |
2026-04-17 04:06:04.661151 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-04-17 04:06:04.661167 | orchestrator | Friday 17 April 2026 04:04:50 +0000 (0:00:00.085) 0:01:08.736 **********
2026-04-17 04:06:04.661187 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:06:04.661202 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:06:04.661219 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:06:04.661238 | orchestrator |
2026-04-17 04:06:04.661257 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-04-17 04:06:04.661275 | orchestrator | Friday 17 April 2026 04:05:41 +0000 (0:00:50.832) 0:01:59.569 **********
2026-04-17 04:06:04.661294 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:06:04.661314 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:06:04.661333 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:06:04.661352 | orchestrator |
2026-04-17 04:06:04.661366 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-04-17 04:06:04.661376 | orchestrator | Friday 17 April 2026 04:05:51 +0000 (0:00:10.144) 0:02:09.714 **********
2026-04-17 04:06:04.661387 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:06:04.661398 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:06:04.661409 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:06:04.661419 | orchestrator |
2026-04-17 04:06:04.661430 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-17 04:06:04.661441 | orchestrator | Friday 17 April 2026 04:06:04 +0000 (0:00:12.153) 0:02:21.868 **********
2026-04-17 04:06:04.661464 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:06:54.210987 | orchestrator |
2026-04-17 04:06:54.211109 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-04-17 04:06:54.211129 | orchestrator | Friday 17 April 2026 04:06:04 +0000 (0:00:00.619) 0:02:22.488 **********
2026-04-17 04:06:54.211138 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:06:54.211146 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:06:54.211154 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:06:54.211211 | orchestrator |
2026-04-17 04:06:54.211221 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-04-17 04:06:54.211229 | orchestrator | Friday 17 April 2026 04:06:05 +0000 (0:00:01.318) 0:02:23.806 **********
2026-04-17 04:06:54.211236 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:06:54.211244 | orchestrator |
2026-04-17 04:06:54.211251 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-04-17 04:06:54.211257 | orchestrator | Friday 17 April 2026 04:06:07 +0000 (0:00:01.853) 0:02:25.659 **********
2026-04-17 04:06:54.211265 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-04-17 04:06:54.211272 | orchestrator |
2026-04-17 04:06:54.211279 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-04-17 04:06:54.211286 | orchestrator | Friday 17 April 2026 04:06:18 +0000 (0:00:10.679) 0:02:36.339 **********
2026-04-17 04:06:54.211293 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-04-17 04:06:54.211301 | orchestrator |
2026-04-17 04:06:54.211309 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-04-17 04:06:54.211317 | orchestrator | Friday 17 April 2026 04:06:42 +0000 (0:00:24.267) 0:03:00.607 **********
2026-04-17 04:06:54.211324 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-04-17 04:06:54.211334 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-04-17 04:06:54.211341 | orchestrator |
2026-04-17 04:06:54.211369 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-04-17 04:06:54.211377 | orchestrator | Friday 17 April 2026 04:06:49 +0000 (0:00:06.485) 0:03:07.093 **********
2026-04-17 04:06:54.211384 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:06:54.211391 | orchestrator |
2026-04-17 04:06:54.211398 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-04-17 04:06:54.211405 | orchestrator | Friday 17 April 2026 04:06:49 +0000 (0:00:00.143) 0:03:07.236 **********
2026-04-17 04:06:54.211412 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:06:54.211419 | orchestrator |
2026-04-17 04:06:54.211426 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-04-17 04:06:54.211433 | orchestrator | Friday 17 April 2026 04:06:49 +0000 (0:00:00.139) 0:03:07.376 **********
2026-04-17 04:06:54.211440 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:06:54.211446 | orchestrator |
2026-04-17 04:06:54.211453 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-04-17 04:06:54.211473 | orchestrator | Friday 17 April 2026 04:06:49 +0000 (0:00:00.158) 0:03:07.534 **********
2026-04-17 04:06:54.211480 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:06:54.211487 | orchestrator |
2026-04-17 04:06:54.211493 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-04-17 04:06:54.211499 | orchestrator | Friday 17 April 2026 04:06:50 +0000 (0:00:00.583) 0:03:08.117 **********
2026-04-17 04:06:54.211505 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:06:54.211512 | orchestrator |
2026-04-17 04:06:54.211519 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-17 04:06:54.211525 | orchestrator | Friday 17 April 2026 04:06:53 +0000 (0:00:03.041) 0:03:11.158 **********
2026-04-17 04:06:54.211532 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:06:54.211538 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:06:54.211545 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:06:54.211555 | orchestrator |
2026-04-17 04:06:54.211561 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:06:54.211568 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-17 04:06:54.211577 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-17 04:06:54.211584 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-17 04:06:54.211590 | orchestrator |
2026-04-17 04:06:54.211596 | orchestrator |
2026-04-17 04:06:54.211602 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:06:54.211609 | orchestrator | Friday 17 April 2026 04:06:53 +0000 (0:00:00.480) 0:03:11.639 **********
2026-04-17 04:06:54.211616 | orchestrator | ===============================================================================
2026-04-17 04:06:54.211622 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 50.83s
2026-04-17 04:06:54.211629 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.27s
2026-04-17 04:06:54.211637 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.97s
2026-04-17 04:06:54.211643 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.15s
2026-04-17 04:06:54.211650 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.68s
2026-04-17 04:06:54.211657 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.14s
2026-04-17 04:06:54.211663 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.77s
2026-04-17 04:06:54.211670 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.52s
2026-04-17 04:06:54.211677 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.49s
2026-04-17 04:06:54.211713 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.35s
2026-04-17 04:06:54.211722 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.10s
2026-04-17 04:06:54.211728 | orchestrator | keystone : Creating default user role ----------------------------------- 3.04s
2026-04-17 04:06:54.211734 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.90s
2026-04-17 04:06:54.211740 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.31s
2026-04-17 04:06:54.211746 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.19s
2026-04-17 04:06:54.211752 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.19s
2026-04-17 04:06:54.211759 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.10s
2026-04-17 04:06:54.211765 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.85s
2026-04-17 04:06:54.211772 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.82s
2026-04-17 04:06:54.211779 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.55s
2026-04-17 04:06:57.800365 | orchestrator | 2026-04-17 04:06:57 | INFO  | Task 2fc66c53-c4ba-4096-97de-464195c74897 (placement) was prepared for execution.
2026-04-17 04:06:57.800485 | orchestrator | 2026-04-17 04:06:57 | INFO  | It takes a moment until task 2fc66c53-c4ba-4096-97de-464195c74897 (placement) has been started and output is visible here.
2026-04-17 04:07:31.897753 | orchestrator |
2026-04-17 04:07:31.897892 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 04:07:31.897902 | orchestrator |
2026-04-17 04:07:31.897907 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 04:07:31.897912 | orchestrator | Friday 17 April 2026 04:07:02 +0000 (0:00:00.283) 0:00:00.283 **********
2026-04-17 04:07:31.897917 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:07:31.897922 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:07:31.897927 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:07:31.897932 | orchestrator |
2026-04-17 04:07:31.897936 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 04:07:31.897940 | orchestrator | Friday 17 April 2026 04:07:02 +0000 (0:00:00.313) 0:00:00.597 **********
2026-04-17 04:07:31.897945 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-17 04:07:31.897950 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-17 04:07:31.897954 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-17 04:07:31.897958 | orchestrator |
2026-04-17 04:07:31.897962 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-17 04:07:31.897967 | orchestrator |
2026-04-17 04:07:31.897988 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-17 04:07:31.897997 | orchestrator | Friday 17 April 2026 04:07:02 +0000 (0:00:00.451) 0:00:01.048 **********
2026-04-17 04:07:31.898006 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:07:31.898049 | orchestrator |
2026-04-17 04:07:31.898057 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-04-17 04:07:31.898064 | orchestrator | Friday 17 April 2026 04:07:03 +0000 (0:00:00.574) 0:00:01.623 **********
2026-04-17 04:07:31.898070 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-04-17 04:07:31.898077 | orchestrator |
2026-04-17 04:07:31.898083 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-04-17 04:07:31.898090 | orchestrator | Friday 17 April 2026 04:07:07 +0000 (0:00:03.689) 0:00:05.313 **********
2026-04-17 04:07:31.898097 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-04-17 04:07:31.898104 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-04-17 04:07:31.898112 | orchestrator |
2026-04-17 04:07:31.898141 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-04-17 04:07:31.898150 | orchestrator | Friday 17 April 2026 04:07:13 +0000 (0:00:06.226) 0:00:11.540 **********
2026-04-17 04:07:31.898157 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-04-17 04:07:31.898164 | orchestrator |
2026-04-17 04:07:31.898170 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-04-17 04:07:31.898177 | orchestrator | Friday 17 April 2026 04:07:17 +0000 (0:00:03.593) 0:00:15.133 **********
2026-04-17 04:07:31.898184 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-17 04:07:31.898190 | orchestrator | changed:
[testbed-node-0] => (item=placement -> service) 2026-04-17 04:07:31.898196 | orchestrator | 2026-04-17 04:07:31.898203 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-17 04:07:31.898209 | orchestrator | Friday 17 April 2026 04:07:21 +0000 (0:00:04.007) 0:00:19.140 ********** 2026-04-17 04:07:31.898216 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 04:07:31.898222 | orchestrator | 2026-04-17 04:07:31.898228 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-17 04:07:31.898235 | orchestrator | Friday 17 April 2026 04:07:24 +0000 (0:00:03.053) 0:00:22.194 ********** 2026-04-17 04:07:31.898241 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-17 04:07:31.898247 | orchestrator | 2026-04-17 04:07:31.898253 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-17 04:07:31.898262 | orchestrator | Friday 17 April 2026 04:07:27 +0000 (0:00:03.583) 0:00:25.778 ********** 2026-04-17 04:07:31.898268 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:07:31.898275 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:07:31.898281 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:07:31.898288 | orchestrator | 2026-04-17 04:07:31.898295 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-17 04:07:31.898302 | orchestrator | Friday 17 April 2026 04:07:27 +0000 (0:00:00.288) 0:00:26.066 ********** 2026-04-17 04:07:31.898312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:07:31.898340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:07:31.898353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:07:31.898367 | orchestrator | 2026-04-17 04:07:31.898374 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-17 04:07:31.898381 | orchestrator | Friday 17 April 2026 04:07:29 +0000 (0:00:01.075) 0:00:27.142 ********** 2026-04-17 04:07:31.898390 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:07:31.898397 | orchestrator | 2026-04-17 04:07:31.898403 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-17 04:07:31.898410 | orchestrator | Friday 17 April 2026 04:07:29 +0000 (0:00:00.335) 0:00:27.477 ********** 2026-04-17 04:07:31.898416 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:07:31.898423 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:07:31.898429 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:07:31.898436 | orchestrator | 2026-04-17 04:07:31.898442 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-17 04:07:31.898449 | orchestrator | Friday 17 April 2026 04:07:29 +0000 (0:00:00.307) 0:00:27.785 ********** 2026-04-17 04:07:31.898457 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:07:31.898462 | orchestrator | 2026-04-17 04:07:31.898466 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-17 04:07:31.898470 | orchestrator | Friday 17 April 2026 
04:07:30 +0000 (0:00:00.544) 0:00:28.330 ********** 2026-04-17 04:07:31.898475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:07:31.898486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:07:34.681364 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:07:34.681461 | orchestrator | 2026-04-17 04:07:34.681470 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-17 04:07:34.681478 | orchestrator | Friday 17 April 2026 04:07:31 +0000 (0:00:01.671) 0:00:30.001 ********** 2026-04-17 04:07:34.681485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 04:07:34.681492 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:07:34.681499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 04:07:34.681505 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:07:34.681511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 04:07:34.681536 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:07:34.681542 | orchestrator | 2026-04-17 04:07:34.681548 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-17 04:07:34.681573 | orchestrator | Friday 17 April 2026 04:07:32 +0000 (0:00:00.485) 0:00:30.486 ********** 2026-04-17 04:07:34.681588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 04:07:34.681599 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:07:34.681609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 04:07:34.681620 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:07:34.681631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 04:07:34.681642 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:07:34.681652 | orchestrator | 2026-04-17 04:07:34.681661 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-17 04:07:34.681667 | orchestrator | Friday 17 April 2026 04:07:33 +0000 (0:00:00.716) 0:00:31.203 ********** 2026-04-17 04:07:34.681673 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:07:34.681695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:07:41.802299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:07:41.802422 | orchestrator | 2026-04-17 04:07:41.802441 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-17 04:07:41.802455 | orchestrator | Friday 17 April 2026 04:07:34 +0000 (0:00:01.587) 0:00:32.791 ********** 2026-04-17 04:07:41.802468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-04-17 04:07:41.802479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:07:41.802520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:07:41.802529 | orchestrator | 2026-04-17 04:07:41.802536 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-17 04:07:41.802544 | orchestrator | Friday 17 April 2026 04:07:37 +0000 (0:00:02.352) 0:00:35.143 ********** 2026-04-17 04:07:41.802567 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-17 04:07:41.802576 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-17 04:07:41.802584 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-17 04:07:41.802591 | orchestrator | 2026-04-17 04:07:41.802598 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-17 04:07:41.802606 | orchestrator | Friday 17 April 2026 04:07:38 +0000 (0:00:01.467) 0:00:36.611 ********** 2026-04-17 04:07:41.802613 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:07:41.802622 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:07:41.802629 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:07:41.802636 | orchestrator | 2026-04-17 04:07:41.802643 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-17 04:07:41.802650 | orchestrator | Friday 17 April 2026 04:07:39 +0000 (0:00:01.343) 0:00:37.954 ********** 2026-04-17 04:07:41.802658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 04:07:41.802666 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:07:41.802674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 04:07:41.802689 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:07:41.802697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 04:07:41.802704 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:07:41.802712 | orchestrator | 2026-04-17 04:07:41.802719 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-17 04:07:41.802726 | orchestrator | Friday 17 April 2026 04:07:40 +0000 (0:00:00.870) 0:00:38.825 ********** 2026-04-17 04:07:41.802745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:08:09.185589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:08:09.185676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 04:08:09.185697 | orchestrator | 2026-04-17 04:08:09.185702 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-17 04:08:09.185708 | orchestrator | Friday 17 April 2026 04:07:41 +0000 (0:00:01.086) 0:00:39.911 ********** 2026-04-17 04:08:09.185712 | orchestrator | changed: [testbed-node-0] 2026-04-17 
04:08:09.185717 | orchestrator |
2026-04-17 04:08:09.185721 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-04-17 04:08:09.185725 | orchestrator | Friday 17 April 2026 04:07:43 +0000 (0:00:01.970) 0:00:41.882 **********
2026-04-17 04:08:09.185729 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:08:09.185732 | orchestrator |
2026-04-17 04:08:09.185737 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-04-17 04:08:09.185741 | orchestrator | Friday 17 April 2026 04:07:45 +0000 (0:00:02.123) 0:00:44.006 **********
2026-04-17 04:08:09.185744 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:08:09.185748 | orchestrator |
2026-04-17 04:08:09.185752 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-17 04:08:09.185756 | orchestrator | Friday 17 April 2026 04:07:58 +0000 (0:00:12.755) 0:00:56.761 **********
2026-04-17 04:08:09.185760 | orchestrator |
2026-04-17 04:08:09.185764 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-17 04:08:09.185768 | orchestrator | Friday 17 April 2026 04:07:58 +0000 (0:00:00.068) 0:00:56.830 **********
2026-04-17 04:08:09.185771 | orchestrator |
2026-04-17 04:08:09.185775 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-17 04:08:09.185779 | orchestrator | Friday 17 April 2026 04:07:58 +0000 (0:00:00.082) 0:00:56.912 **********
2026-04-17 04:08:09.185783 | orchestrator |
2026-04-17 04:08:09.185786 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-04-17 04:08:09.185790 | orchestrator | Friday 17 April 2026 04:07:58 +0000 (0:00:00.078) 0:00:56.990 **********
2026-04-17 04:08:09.185794 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:08:09.185798 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:08:09.185801 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:08:09.185805 | orchestrator |
2026-04-17 04:08:09.185840 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:08:09.185845 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-17 04:08:09.185850 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 04:08:09.185860 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 04:08:09.185863 | orchestrator |
2026-04-17 04:08:09.185867 | orchestrator |
2026-04-17 04:08:09.185871 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:08:09.185875 | orchestrator | Friday 17 April 2026 04:08:08 +0000 (0:00:09.856) 0:01:06.847 **********
2026-04-17 04:08:09.185879 | orchestrator | ===============================================================================
2026-04-17 04:08:09.185882 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.76s
2026-04-17 04:08:09.185900 | orchestrator | placement : Restart placement-api container ----------------------------- 9.86s
2026-04-17 04:08:09.185904 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.23s
2026-04-17 04:08:09.185908 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.01s
2026-04-17 04:08:09.185912 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.69s
2026-04-17 04:08:09.185916 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.59s
2026-04-17 04:08:09.185919 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.58s
2026-04-17 04:08:09.185923 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.05s
2026-04-17 04:08:09.185927 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.35s
2026-04-17 04:08:09.185931 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.12s
2026-04-17 04:08:09.185934 | orchestrator | placement : Creating placement databases -------------------------------- 1.97s
2026-04-17 04:08:09.185938 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.67s
2026-04-17 04:08:09.185942 | orchestrator | placement : Copying over config.json files for services ----------------- 1.59s
2026-04-17 04:08:09.185946 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.47s
2026-04-17 04:08:09.185950 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.34s
2026-04-17 04:08:09.185953 | orchestrator | placement : Check placement containers ---------------------------------- 1.09s
2026-04-17 04:08:09.185957 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.08s
2026-04-17 04:08:09.185961 | orchestrator | placement : Copying over existing policy file --------------------------- 0.87s
2026-04-17 04:08:09.185964 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.72s
2026-04-17 04:08:09.185968 | orchestrator | placement : include_tasks ----------------------------------------------- 0.57s
2026-04-17 04:08:11.689362 | orchestrator | 2026-04-17 04:08:11 | INFO  | Task 0beb4d1c-6696-4b9c-8054-cb909c804ac2 (neutron) was prepared for execution.
2026-04-17 04:08:11.689468 | orchestrator | 2026-04-17 04:08:11 | INFO  | It takes a moment until task 0beb4d1c-6696-4b9c-8054-cb909c804ac2 (neutron) has been started and output is visible here.
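Aside: the item dicts dumped by the "Check placement containers" task above all carry a kolla-style `healthcheck` definition such as `['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780']`. As a minimal sketch (not the actual kolla-ansible source; the function name and defaults are hypothetical, chosen to mirror the values visible in this log), such a definition could be assembled like this:

```python
# Hypothetical helper mirroring the healthcheck dicts seen in the log above.
# 'healthcheck_curl' is a script shipped inside the kolla images that probes
# the given URL and exits non-zero on failure.
def build_api_healthcheck(host: str, port: int,
                          interval: str = "30", retries: str = "3",
                          start_period: str = "5", timeout: str = "30") -> dict:
    """Return a kolla-style healthcheck definition for an HTTP API container."""
    return {
        "interval": interval,
        "retries": retries,
        "start_period": start_period,
        "test": ["CMD-SHELL", f"healthcheck_curl http://{host}:{port}"],
        "timeout": timeout,
    }

# Example matching the placement-api entry for testbed-node-1 (192.168.16.11:8780).
hc = build_api_healthcheck("192.168.16.11", 8780)
print(hc["test"][1])  # healthcheck_curl http://192.168.16.11:8780
```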
2026-04-17 04:08:57.702466 | orchestrator |
2026-04-17 04:08:57.702578 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 04:08:57.702591 | orchestrator |
2026-04-17 04:08:57.702599 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 04:08:57.702607 | orchestrator | Friday 17 April 2026 04:08:15 +0000 (0:00:00.237) 0:00:00.237 **********
2026-04-17 04:08:57.702614 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:08:57.702622 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:08:57.702628 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:08:57.702635 | orchestrator | ok: [testbed-node-3]
2026-04-17 04:08:57.702642 | orchestrator | ok: [testbed-node-4]
2026-04-17 04:08:57.702648 | orchestrator | ok: [testbed-node-5]
2026-04-17 04:08:57.702655 | orchestrator |
2026-04-17 04:08:57.702662 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 04:08:57.702669 | orchestrator | Friday 17 April 2026 04:08:16 +0000 (0:00:00.633) 0:00:00.870 **********
2026-04-17 04:08:57.702676 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-17 04:08:57.702683 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-17 04:08:57.702689 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-17 04:08:57.702696 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-17 04:08:57.702703 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-17 04:08:57.702710 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-17 04:08:57.702716 | orchestrator |
2026-04-17 04:08:57.702744 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-17 04:08:57.702751 | orchestrator |
2026-04-17 04:08:57.702758 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-17 04:08:57.702764 | orchestrator | Friday 17 April 2026 04:08:16 +0000 (0:00:00.547) 0:00:01.418 **********
2026-04-17 04:08:57.702772 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 04:08:57.702779 | orchestrator |
2026-04-17 04:08:57.702798 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-17 04:08:57.702805 | orchestrator | Friday 17 April 2026 04:08:18 +0000 (0:00:01.115) 0:00:02.533 **********
2026-04-17 04:08:57.702812 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:08:57.702818 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:08:57.702882 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:08:57.702895 | orchestrator | ok: [testbed-node-3]
2026-04-17 04:08:57.702906 | orchestrator | ok: [testbed-node-4]
2026-04-17 04:08:57.702918 | orchestrator | ok: [testbed-node-5]
2026-04-17 04:08:57.702929 | orchestrator |
2026-04-17 04:08:57.702935 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-17 04:08:57.702942 | orchestrator | Friday 17 April 2026 04:08:19 +0000 (0:00:01.379) 0:00:03.913 **********
2026-04-17 04:08:57.702949 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:08:57.702956 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:08:57.702962 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:08:57.702969 | orchestrator | ok: [testbed-node-3]
2026-04-17 04:08:57.702975 | orchestrator | ok: [testbed-node-4]
2026-04-17 04:08:57.702982 | orchestrator | ok: [testbed-node-5]
2026-04-17 04:08:57.702988 | orchestrator |
2026-04-17 04:08:57.702995 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-17 04:08:57.703001 | orchestrator | Friday 17 April 2026 04:08:20 +0000 (0:00:01.059) 0:00:04.973 **********
2026-04-17 04:08:57.703008 | orchestrator | ok: [testbed-node-0] => {
2026-04-17 04:08:57.703018 | orchestrator |  "changed": false,
2026-04-17 04:08:57.703029 | orchestrator |  "msg": "All assertions passed"
2026-04-17 04:08:57.703041 | orchestrator | }
2026-04-17 04:08:57.703053 | orchestrator | ok: [testbed-node-1] => {
2026-04-17 04:08:57.703063 | orchestrator |  "changed": false,
2026-04-17 04:08:57.703074 | orchestrator |  "msg": "All assertions passed"
2026-04-17 04:08:57.703086 | orchestrator | }
2026-04-17 04:08:57.703097 | orchestrator | ok: [testbed-node-2] => {
2026-04-17 04:08:57.703107 | orchestrator |  "changed": false,
2026-04-17 04:08:57.703118 | orchestrator |  "msg": "All assertions passed"
2026-04-17 04:08:57.703129 | orchestrator | }
2026-04-17 04:08:57.703140 | orchestrator | ok: [testbed-node-3] => {
2026-04-17 04:08:57.703152 | orchestrator |  "changed": false,
2026-04-17 04:08:57.703164 | orchestrator |  "msg": "All assertions passed"
2026-04-17 04:08:57.703176 | orchestrator | }
2026-04-17 04:08:57.703188 | orchestrator | ok: [testbed-node-4] => {
2026-04-17 04:08:57.703200 | orchestrator |  "changed": false,
2026-04-17 04:08:57.703209 | orchestrator |  "msg": "All assertions passed"
2026-04-17 04:08:57.703217 | orchestrator | }
2026-04-17 04:08:57.703224 | orchestrator | ok: [testbed-node-5] => {
2026-04-17 04:08:57.703233 | orchestrator |  "changed": false,
2026-04-17 04:08:57.703240 | orchestrator |  "msg": "All assertions passed"
2026-04-17 04:08:57.703248 | orchestrator | }
2026-04-17 04:08:57.703256 | orchestrator |
2026-04-17 04:08:57.703263 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-17 04:08:57.703271 | orchestrator | Friday 17 April 2026 04:08:21 +0000 (0:00:00.847) 0:00:05.821 **********
2026-04-17 04:08:57.703279 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:08:57.703287 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:08:57.703295 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:08:57.703302 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:08:57.703310 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:08:57.703329 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:08:57.703337 | orchestrator |
2026-04-17 04:08:57.703345 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-04-17 04:08:57.703353 | orchestrator | Friday 17 April 2026 04:08:22 +0000 (0:00:00.742) 0:00:06.564 **********
2026-04-17 04:08:57.703361 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-04-17 04:08:57.703369 | orchestrator |
2026-04-17 04:08:57.703377 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-04-17 04:08:57.703384 | orchestrator | Friday 17 April 2026 04:08:25 +0000 (0:00:03.341) 0:00:09.905 **********
2026-04-17 04:08:57.703392 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-04-17 04:08:57.703401 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-04-17 04:08:57.703408 | orchestrator |
2026-04-17 04:08:57.703433 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-04-17 04:08:57.703441 | orchestrator | Friday 17 April 2026 04:08:31 +0000 (0:00:06.269) 0:00:16.174 **********
2026-04-17 04:08:57.703448 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-17 04:08:57.703455 | orchestrator |
2026-04-17 04:08:57.703462 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-04-17 04:08:57.703469 | orchestrator | Friday 17 April 2026 04:08:34 +0000 (0:00:02.987) 0:00:19.162 **********
2026-04-17 04:08:57.703475 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-17 04:08:57.703482 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-04-17 04:08:57.703488 | orchestrator |
2026-04-17 04:08:57.703495 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-04-17 04:08:57.703502 | orchestrator | Friday 17 April 2026 04:08:38 +0000 (0:00:04.117) 0:00:23.279 **********
2026-04-17 04:08:57.703508 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-17 04:08:57.703515 | orchestrator |
2026-04-17 04:08:57.703522 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-04-17 04:08:57.703528 | orchestrator | Friday 17 April 2026 04:08:41 +0000 (0:00:03.046) 0:00:26.326 **********
2026-04-17 04:08:57.703535 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-04-17 04:08:57.703541 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-04-17 04:08:57.703548 | orchestrator |
2026-04-17 04:08:57.703555 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-17 04:08:57.703561 | orchestrator | Friday 17 April 2026 04:08:49 +0000 (0:00:07.499) 0:00:33.826 **********
2026-04-17 04:08:57.703568 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:08:57.703575 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:08:57.703581 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:08:57.703588 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:08:57.703595 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:08:57.703601 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:08:57.703608 | orchestrator |
2026-04-17 04:08:57.703615 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-04-17 04:08:57.703627 | orchestrator | Friday 17 April 2026 04:08:49 +0000 (0:00:00.646) 0:00:34.472 **********
2026-04-17 04:08:57.703634 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:08:57.703641 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:08:57.703648 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:08:57.703654 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:08:57.703661 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:08:57.703668 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:08:57.703674 | orchestrator |
2026-04-17 04:08:57.703681 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-04-17 04:08:57.703688 | orchestrator | Friday 17 April 2026 04:08:51 +0000 (0:00:01.994) 0:00:36.467 **********
2026-04-17 04:08:57.703695 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:08:57.703707 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:08:57.703714 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:08:57.703720 | orchestrator | ok: [testbed-node-3]
2026-04-17 04:08:57.703727 | orchestrator | ok: [testbed-node-4]
2026-04-17 04:08:57.703734 | orchestrator | ok: [testbed-node-5]
2026-04-17 04:08:57.703740 | orchestrator |
2026-04-17 04:08:57.703747 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-17 04:08:57.703754 | orchestrator | Friday 17 April 2026 04:08:52 +0000 (0:00:00.981) 0:00:37.449 **********
2026-04-17 04:08:57.703760 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:08:57.703767 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:08:57.703774 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:08:57.703780 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:08:57.703787 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:08:57.703793 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:08:57.703800 | orchestrator |
2026-04-17 04:08:57.703807 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-04-17 04:08:57.703813 | orchestrator | Friday 17 April 2026 04:08:55 +0000 (0:00:02.163)
0:00:39.612 ********** 2026-04-17 04:08:57.703837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:08:57.703855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:03.079108 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:09:03.079225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:03.079265 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:09:03.079277 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:09:03.079285 | orchestrator | 2026-04-17 04:09:03.079292 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-17 04:09:03.079300 | orchestrator | Friday 17 April 2026 04:08:57 +0000 (0:00:02.595) 0:00:42.208 ********** 2026-04-17 04:09:03.079305 | orchestrator | [WARNING]: Skipped 2026-04-17 04:09:03.079313 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-17 04:09:03.079320 | orchestrator | due to this access issue: 2026-04-17 04:09:03.079328 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-17 04:09:03.079334 | orchestrator | a directory 2026-04-17 04:09:03.079341 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 04:09:03.079348 | orchestrator | 2026-04-17 04:09:03.079357 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-17 04:09:03.079364 | orchestrator | Friday 17 April 2026 04:08:58 +0000 (0:00:00.853) 0:00:43.062 ********** 2026-04-17 04:09:03.079370 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 04:09:03.079378 | orchestrator | 2026-04-17 04:09:03.079383 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-17 04:09:03.079405 | orchestrator | Friday 17 April 2026 04:08:59 +0000 (0:00:01.372) 0:00:44.434 ********** 2026-04-17 04:09:03.079414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:03.079436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:03.079444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:03.079451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:09:03.079464 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:09:07.786873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:09:07.787058 | orchestrator | 2026-04-17 04:09:07.787086 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-17 04:09:07.787121 | orchestrator | Friday 17 April 2026 04:09:03 +0000 (0:00:03.149) 0:00:47.583 ********** 2026-04-17 04:09:07.787142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:07.787161 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:07.787182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:07.787193 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:07.787203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:07.787213 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:07.787242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:07.787264 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:07.787280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:07.787290 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:07.787300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:07.787310 | orchestrator | skipping: [testbed-node-5] 
2026-04-17 04:09:07.787320 | orchestrator | 2026-04-17 04:09:07.787330 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-17 04:09:07.787339 | orchestrator | Friday 17 April 2026 04:09:04 +0000 (0:00:01.917) 0:00:49.501 ********** 2026-04-17 04:09:07.787350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:07.787362 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:07.787381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:13.070979 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:13.071076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:13.071090 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:13.071112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:13.071120 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:13.071127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:13.071134 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:13.071140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:13.071168 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:09:13.071174 | orchestrator | 2026-04-17 
04:09:13.071182 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-04-17 04:09:13.071191 | orchestrator | Friday 17 April 2026 04:09:07 +0000 (0:00:02.791) 0:00:52.293 **********
2026-04-17 04:09:13.071197 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:09:13.071204 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:09:13.071210 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:09:13.071216 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:09:13.071224 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:09:13.071228 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:09:13.071232 | orchestrator |
2026-04-17 04:09:13.071235 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-04-17 04:09:13.071239 | orchestrator | Friday 17 April 2026 04:09:10 +0000 (0:00:00.156) 0:00:54.612 **********
2026-04-17 04:09:13.071243 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:09:13.071247 | orchestrator |
2026-04-17 04:09:13.071250 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-04-17 04:09:13.071267 | orchestrator | Friday 17 April 2026 04:09:10 +0000 (0:00:00.156) 0:00:54.769 **********
2026-04-17 04:09:13.071271 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:09:13.071275 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:09:13.071278 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:09:13.071282 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:09:13.071286 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:09:13.071290 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:09:13.071293 | orchestrator |
2026-04-17 04:09:13.071297 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-04-17 04:09:13.071301 | orchestrator | Friday 17 April 2026 04:09:10 +0000 (0:00:00.643)
0:00:55.412 ********** 2026-04-17 04:09:13.071309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:13.071313 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:13.071317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 
04:09:13.071321 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:13.071328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:13.071343 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:13.071350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:13.071357 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:13.071368 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:21.100152 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:21.100243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:21.100253 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:09:21.100258 | orchestrator | 2026-04-17 04:09:21.100264 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-17 04:09:21.100270 | orchestrator | Friday 17 April 2026 04:09:13 +0000 (0:00:02.155) 0:00:57.568 ********** 2026-04-17 04:09:21.100275 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:21.100295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:21.100300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:21.100319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:09:21.100324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:09:21.100329 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:09:21.100338 | orchestrator | 2026-04-17 04:09:21.100342 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-17 04:09:21.100347 | orchestrator | Friday 17 April 2026 04:09:15 +0000 (0:00:02.840) 0:01:00.409 ********** 2026-04-17 04:09:21.100351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:21.100356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:21.100368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:25.628350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:09:25.628469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 
04:09:25.628483 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:09:25.628494 | orchestrator | 2026-04-17 04:09:25.628504 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-17 04:09:25.628514 | orchestrator | Friday 17 April 2026 04:09:21 +0000 (0:00:05.195) 0:01:05.604 ********** 2026-04-17 04:09:25.628524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-04-17 04:09:25.628534 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:25.628575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:25.628592 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:25.628602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:25.628611 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:25.628620 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:25.628629 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:25.628638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:25.628647 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:25.628660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 04:09:25.628670 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:09:25.628678 | orchestrator |
2026-04-17 04:09:25.628687 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-04-17 04:09:25.628696 | orchestrator | Friday 17 April 2026 04:09:23 +0000 (0:00:01.940) 0:01:07.545 **********
2026-04-17 04:09:25.628705 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:09:25.628720 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:09:25.628728 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:09:25.628737 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:09:25.628746 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:09:25.628754 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:09:25.628763 | orchestrator |
2026-04-17 04:09:25.628772 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-04-17 04:09:25.628787 | orchestrator | Friday 17 April 2026 04:09:25 +0000 (0:00:02.582) 0:01:10.127 **********
2026-04-17 04:09:44.391973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True,
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:44.392074 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:44.392087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:44.392095 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:09:44.392102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:44.392109 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:44.392117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:44.392187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:44.392202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:09:44.392212 | orchestrator | 2026-04-17 04:09:44.392222 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-04-17 04:09:44.392233 | orchestrator | Friday 17 April 2026 04:09:28 +0000 (0:00:03.296) 0:01:13.424 ********** 2026-04-17 04:09:44.392242 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:44.392251 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:44.392261 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:44.392270 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:09:44.392279 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:44.392290 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:44.392300 | orchestrator | 2026-04-17 04:09:44.392310 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-04-17 04:09:44.392321 | orchestrator | Friday 17 April 2026 04:09:31 +0000 (0:00:02.159) 0:01:15.584 ********** 2026-04-17 04:09:44.392331 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:44.392345 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:44.392368 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:44.392378 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:09:44.392387 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:44.392397 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:44.392406 | orchestrator | 2026-04-17 04:09:44.392416 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-17 04:09:44.392426 | orchestrator | Friday 17 April 2026 04:09:33 +0000 (0:00:02.089) 0:01:17.673 ********** 2026-04-17 04:09:44.392436 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:44.392447 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:44.392457 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:44.392468 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:44.392479 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:44.392489 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:09:44.392497 | orchestrator | 2026-04-17 04:09:44.392505 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-17 04:09:44.392512 | orchestrator | Friday 17 April 2026 04:09:35 +0000 (0:00:02.343) 0:01:20.017 ********** 2026-04-17 04:09:44.392529 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:44.392535 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:44.392541 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:44.392548 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:44.392554 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:44.392560 | orchestrator | 
skipping: [testbed-node-5] 2026-04-17 04:09:44.392566 | orchestrator | 2026-04-17 04:09:44.392572 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-17 04:09:44.392578 | orchestrator | Friday 17 April 2026 04:09:37 +0000 (0:00:02.249) 0:01:22.267 ********** 2026-04-17 04:09:44.392584 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:44.392591 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:44.392597 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:44.392603 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:44.392609 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:44.392615 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:09:44.392622 | orchestrator | 2026-04-17 04:09:44.392628 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-17 04:09:44.392634 | orchestrator | Friday 17 April 2026 04:09:39 +0000 (0:00:02.248) 0:01:24.515 ********** 2026-04-17 04:09:44.392641 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:44.392647 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:44.392653 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:44.392659 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:44.392665 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:09:44.392671 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:44.392677 | orchestrator | 2026-04-17 04:09:44.392689 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-17 04:09:44.392696 | orchestrator | Friday 17 April 2026 04:09:42 +0000 (0:00:02.224) 0:01:26.740 ********** 2026-04-17 04:09:44.392702 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 04:09:44.392708 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:44.392715 | orchestrator | skipping: 
[testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 04:09:44.392721 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:44.392727 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 04:09:44.392733 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:09:44.392740 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 04:09:44.392746 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:44.392760 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 04:09:47.899806 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:47.900064 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 04:09:47.900085 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:09:47.900097 | orchestrator | 2026-04-17 04:09:47.900110 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-17 04:09:47.900121 | orchestrator | Friday 17 April 2026 04:09:44 +0000 (0:00:02.151) 0:01:28.891 ********** 2026-04-17 04:09:47.900983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:47.901094 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:47.901108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:47.901118 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:47.901127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:47.901135 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:09:47.901157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:47.901167 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:09:47.901199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:47.901213 | orchestrator | 
skipping: [testbed-node-3] 2026-04-17 04:09:47.901227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:09:47.901252 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:09:47.901266 | orchestrator | 2026-04-17 04:09:47.901279 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-17 04:09:47.901293 | orchestrator | Friday 17 April 2026 04:09:46 +0000 (0:00:01.731) 0:01:30.622 ********** 2026-04-17 04:09:47.901307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:47.901320 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:09:47.901340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:09:47.901355 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:09:47.901381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:10:13.310919 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:13.311042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:10:13.311086 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:10:13.311099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:10:13.311111 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:10:13.311122 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:10:13.311134 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:10:13.311141 | orchestrator | 2026-04-17 04:10:13.311148 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-17 04:10:13.311155 | orchestrator | Friday 17 April 2026 04:09:47 +0000 (0:00:01.779) 0:01:32.402 ********** 2026-04-17 04:10:13.311162 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:10:13.311168 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:10:13.311174 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:13.311180 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:10:13.311186 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:10:13.311193 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:10:13.311199 | orchestrator | 2026-04-17 04:10:13.311206 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-17 04:10:13.311224 | orchestrator | Friday 17 April 2026 04:09:49 +0000 (0:00:01.785) 0:01:34.187 ********** 2026-04-17 04:10:13.311231 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:10:13.311237 | orchestrator | skipping: [testbed-node-0] 2026-04-17 
04:10:13.311243 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:13.311249 | orchestrator | changed: [testbed-node-3] 2026-04-17 04:10:13.311255 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:10:13.311261 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:10:13.311267 | orchestrator | 2026-04-17 04:10:13.311273 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-17 04:10:13.311280 | orchestrator | Friday 17 April 2026 04:09:53 +0000 (0:00:03.630) 0:01:37.818 ********** 2026-04-17 04:10:13.311294 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:10:13.311300 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:10:13.311306 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:13.311312 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:10:13.311318 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:10:13.311324 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:10:13.311330 | orchestrator | 2026-04-17 04:10:13.311337 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-17 04:10:13.311343 | orchestrator | Friday 17 April 2026 04:09:55 +0000 (0:00:02.030) 0:01:39.848 ********** 2026-04-17 04:10:13.311349 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:10:13.311355 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:10:13.311361 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:13.311367 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:10:13.311373 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:10:13.311379 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:10:13.311385 | orchestrator | 2026-04-17 04:10:13.311392 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-17 04:10:13.311412 | orchestrator | Friday 17 April 2026 04:09:57 +0000 (0:00:02.242) 0:01:42.091 ********** 2026-04-17 
04:10:13.311419 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:10:13.311425 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:10:13.311431 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:13.311437 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:10:13.311444 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:10:13.311450 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:10:13.311457 | orchestrator | 2026-04-17 04:10:13.311464 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-17 04:10:13.311471 | orchestrator | Friday 17 April 2026 04:09:59 +0000 (0:00:02.224) 0:01:44.316 ********** 2026-04-17 04:10:13.311478 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:10:13.311485 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:10:13.311492 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:10:13.311500 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:13.311508 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:10:13.311516 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:10:13.311525 | orchestrator | 2026-04-17 04:10:13.311533 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-04-17 04:10:13.311542 | orchestrator | Friday 17 April 2026 04:10:01 +0000 (0:00:02.160) 0:01:46.476 ********** 2026-04-17 04:10:13.311550 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:10:13.311558 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:10:13.311566 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:13.311575 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:10:13.311583 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:10:13.311591 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:10:13.311599 | orchestrator | 2026-04-17 04:10:13.311607 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-04-17 04:10:13.311615 | orchestrator | Friday 17 April 2026 04:10:04 +0000 (0:00:02.291) 0:01:48.767 ********** 2026-04-17 04:10:13.311623 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:10:13.311632 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:13.311639 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:10:13.311660 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:10:13.311668 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:10:13.311676 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:10:13.311684 | orchestrator | 2026-04-17 04:10:13.311692 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-17 04:10:13.311700 | orchestrator | Friday 17 April 2026 04:10:06 +0000 (0:00:02.214) 0:01:50.982 ********** 2026-04-17 04:10:13.311708 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:10:13.311716 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:13.311724 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:10:13.311737 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:10:13.311745 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:10:13.311753 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:10:13.311762 | orchestrator | 2026-04-17 04:10:13.311770 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-17 04:10:13.311778 | orchestrator | Friday 17 April 2026 04:10:08 +0000 (0:00:02.471) 0:01:53.453 ********** 2026-04-17 04:10:13.311787 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 04:10:13.311796 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:10:13.311804 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 04:10:13.311812 | orchestrator | skipping: [testbed-node-0] 
2026-04-17 04:10:13.311820 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 04:10:13.311849 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:13.311856 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 04:10:13.311864 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:10:13.311871 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 04:10:13.311878 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:10:13.311885 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 04:10:13.311893 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:10:13.311900 | orchestrator | 2026-04-17 04:10:13.311907 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-17 04:10:13.311919 | orchestrator | Friday 17 April 2026 04:10:10 +0000 (0:00:01.935) 0:01:55.389 ********** 2026-04-17 04:10:13.311927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:10:13.311936 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:10:13.311950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:10:15.802207 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:10:15.802319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 04:10:15.802365 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:10:15.802380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:10:15.802393 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:10:15.802418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:10:15.802430 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:10:15.802442 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 04:10:15.802453 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:10:15.802464 | orchestrator | 2026-04-17 04:10:15.802476 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-17 04:10:15.802488 | orchestrator | Friday 17 April 2026 04:10:13 +0000 (0:00:02.423) 0:01:57.813 ********** 2026-04-17 04:10:15.802519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-04-17 04:10:15.802543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:10:15.802559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 04:10:15.802571 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:10:15.802583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 04:10:15.802602 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 04:12:31.587509 | orchestrator |
2026-04-17 04:12:31.587631 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-17 04:12:31.587649 | orchestrator | Friday 17 April 2026 04:10:15 +0000 (0:00:02.490) 0:02:00.303 **********
2026-04-17 04:12:31.587661 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:12:31.587675 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:12:31.587686 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:12:31.587697 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:12:31.587708 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:12:31.587719 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:12:31.587730 | orchestrator |
2026-04-17 04:12:31.587741 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-04-17 04:12:31.587752 | orchestrator | Friday 17 April 2026 04:10:16 +0000 (0:00:00.860) 0:02:01.164 **********
2026-04-17 04:12:31.587763 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:12:31.587773 | orchestrator |
2026-04-17 04:12:31.587784 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-04-17 04:12:31.587795 | orchestrator | Friday 17 April 2026 04:10:18 +0000 (0:00:01.984) 0:02:03.149 **********
2026-04-17 04:12:31.587806 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:12:31.587817 | orchestrator |
2026-04-17 04:12:31.587828 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-04-17 04:12:31.587986 | orchestrator | Friday 17 April 2026 04:10:20 +0000 (0:00:02.173) 0:02:05.322 **********
2026-04-17 04:12:31.588005 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:12:31.588024 | orchestrator |
2026-04-17 04:12:31.588042 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-17 04:12:31.588060 | orchestrator | Friday 17 April 2026 04:11:00 +0000 (0:00:39.998) 0:02:45.321 **********
2026-04-17 04:12:31.588079 | orchestrator |
2026-04-17 04:12:31.588099 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-17 04:12:31.588116 | orchestrator | Friday 17 April 2026 04:11:00 +0000 (0:00:00.087) 0:02:45.409 **********
2026-04-17 04:12:31.588133 | orchestrator |
2026-04-17 04:12:31.588152 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-17 04:12:31.588171 | orchestrator | Friday 17 April 2026 04:11:00 +0000 (0:00:00.070) 0:02:45.480 **********
2026-04-17 04:12:31.588190 | orchestrator |
2026-04-17 04:12:31.588210 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-17 04:12:31.588230 | orchestrator | Friday 17 April 2026 04:11:01 +0000 (0:00:00.082) 0:02:45.563 **********
2026-04-17 04:12:31.588250 | orchestrator |
2026-04-17 04:12:31.588269 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-17 04:12:31.588306 | orchestrator | Friday 17 April 2026 04:11:01 +0000 (0:00:00.068) 0:02:45.631 **********
2026-04-17 04:12:31.588320 | orchestrator |
2026-04-17 04:12:31.588339 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-17 04:12:31.588364 | orchestrator | Friday 17 April 2026 04:11:01 +0000 (0:00:00.070) 0:02:45.702 **********
2026-04-17 04:12:31.588390 | orchestrator |
2026-04-17 04:12:31.588407 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-17 04:12:31.588424 | orchestrator | Friday 17 April 2026 04:11:01 +0000 (0:00:00.071) 0:02:45.773 **********
2026-04-17 04:12:31.588442 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:12:31.588491 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:12:31.588507 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:12:31.588525 | orchestrator |
2026-04-17 04:12:31.588542 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-17 04:12:31.588561 | orchestrator | Friday 17 April 2026 04:11:29 +0000 (0:00:28.391) 0:03:14.164 **********
2026-04-17 04:12:31.588579 | orchestrator | changed: [testbed-node-3]
2026-04-17 04:12:31.588597 | orchestrator | changed: [testbed-node-5]
2026-04-17 04:12:31.588613 | orchestrator | changed: [testbed-node-4]
2026-04-17 04:12:31.588632 | orchestrator |
2026-04-17 04:12:31.588645 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:12:31.588657 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-17 04:12:31.588669 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-17 04:12:31.588680 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-17 04:12:31.588691 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-17 04:12:31.588702 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-17 04:12:31.588713 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-17 04:12:31.588724 | orchestrator |
2026-04-17 04:12:31.588734 | orchestrator |
2026-04-17 04:12:31.588745 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:12:31.588756 | orchestrator | Friday 17 April 2026 04:12:31 +0000 (0:01:01.378) 0:04:15.543 **********
2026-04-17 04:12:31.588767 | orchestrator | ===============================================================================
2026-04-17 04:12:31.588777 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 61.38s
2026-04-17 04:12:31.588788 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.00s
2026-04-17 04:12:31.588799 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.39s
2026-04-17 04:12:31.588831 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.50s
2026-04-17 04:12:31.588876 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.27s
2026-04-17 04:12:31.588887 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.20s
2026-04-17 04:12:31.588897 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.12s
2026-04-17 04:12:31.588908 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.63s
2026-04-17 04:12:31.588919 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.34s
2026-04-17 04:12:31.588930 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.30s
2026-04-17 04:12:31.588941 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.15s
2026-04-17 04:12:31.588952 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.05s
2026-04-17 04:12:31.588962 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 2.99s
2026-04-17 04:12:31.588973 | orchestrator | neutron : Copying over config.json files for services ------------------- 2.84s
2026-04-17 04:12:31.588984 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.79s
2026-04-17 04:12:31.588995 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.60s
2026-04-17 04:12:31.589006 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.58s
2026-04-17 04:12:31.589028 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.49s
2026-04-17 04:12:31.589039 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 2.47s
2026-04-17 04:12:31.589049 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.42s
2026-04-17 04:12:34.532813 | orchestrator | 2026-04-17 04:12:34 | INFO  | Task 69ec0df8-1bc2-48ca-b9b8-b042d08e89bf (nova) was prepared for execution.
2026-04-17 04:12:34.532954 | orchestrator | 2026-04-17 04:12:34 | INFO  | It takes a moment until task 69ec0df8-1bc2-48ca-b9b8-b042d08e89bf (nova) has been started and output is visible here.
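The TASKS RECAP above ranks tasks by wall-clock time in plain text. A small helper (a sketch, not part of the job itself; it assumes the exact `role : task name ------ 12.34s` layout shown in this log, after the Zuul `timestamp | node |` prefix has been stripped) can pull those durations out for comparison across runs:

```python
import re

# Matches profile lines like:
#   "neutron : Restart neutron-server container ------------- 28.39s"
# The duration anchored at end-of-line disambiguates dashes in task names.
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_tasks_recap(lines):
    """Return (task, seconds) pairs for every recap line that matches."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out

sample = [
    "neutron : Restart neutron-ovn-metadata-agent container ----------------- 61.38s",
    "neutron : Running Neutron bootstrap container -------------------------- 40.00s",
]
print(parse_tasks_recap(sample))
```

With durations as floats, the slowest tasks (here the two container restarts and the bootstrap container) can be tracked across nightly runs to spot regressions.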
2026-04-17 04:14:26.948407 | orchestrator |
2026-04-17 04:14:26.948493 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 04:14:26.948504 | orchestrator |
2026-04-17 04:14:26.948511 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-17 04:14:26.948531 | orchestrator | Friday 17 April 2026 04:12:38 +0000 (0:00:00.270) 0:00:00.270 **********
2026-04-17 04:14:26.948538 | orchestrator | changed: [testbed-manager]
2026-04-17 04:14:26.948546 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:14:26.948552 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:14:26.948559 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:14:26.948565 | orchestrator | changed: [testbed-node-3]
2026-04-17 04:14:26.948571 | orchestrator | changed: [testbed-node-4]
2026-04-17 04:14:26.948577 | orchestrator | changed: [testbed-node-5]
2026-04-17 04:14:26.948584 | orchestrator |
2026-04-17 04:14:26.948590 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 04:14:26.948596 | orchestrator | Friday 17 April 2026 04:12:39 +0000 (0:00:00.755) 0:00:01.026 **********
2026-04-17 04:14:26.948603 | orchestrator | changed: [testbed-manager]
2026-04-17 04:14:26.948609 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:14:26.948615 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:14:26.948621 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:14:26.948627 | orchestrator | changed: [testbed-node-3]
2026-04-17 04:14:26.948633 | orchestrator | changed: [testbed-node-4]
2026-04-17 04:14:26.948639 | orchestrator | changed: [testbed-node-5]
2026-04-17 04:14:26.948645 | orchestrator |
2026-04-17 04:14:26.948652 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 04:14:26.948658 | orchestrator | Friday 17 April 2026 04:12:40 +0000 (0:00:00.857) 0:00:01.884 **********
2026-04-17 04:14:26.948664 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-17 04:14:26.948671 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-17 04:14:26.948677 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-17 04:14:26.948683 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-17 04:14:26.948690 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-17 04:14:26.948696 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-17 04:14:26.948702 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-17 04:14:26.948708 | orchestrator |
2026-04-17 04:14:26.948714 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-17 04:14:26.948720 | orchestrator |
2026-04-17 04:14:26.948727 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-17 04:14:26.948733 | orchestrator | Friday 17 April 2026 04:12:40 +0000 (0:00:00.772) 0:00:02.656 **********
2026-04-17 04:14:26.948739 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:14:26.948745 | orchestrator |
2026-04-17 04:14:26.948751 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-17 04:14:26.948757 | orchestrator | Friday 17 April 2026 04:12:41 +0000 (0:00:00.752) 0:00:03.408 **********
2026-04-17 04:14:26.948765 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-04-17 04:14:26.948772 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-04-17 04:14:26.948797 | orchestrator |
2026-04-17 04:14:26.948803 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-17 04:14:26.948810 | orchestrator | Friday 17 April 2026 04:12:45 +0000 (0:00:03.935) 0:00:07.343 **********
2026-04-17 04:14:26.948825 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-17 04:14:26.948832 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-17 04:14:26.948838 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:14:26.948844 | orchestrator |
2026-04-17 04:14:26.948878 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-17 04:14:26.948889 | orchestrator | Friday 17 April 2026 04:12:49 +0000 (0:00:04.097) 0:00:11.440 **********
2026-04-17 04:14:26.948899 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:14:26.948910 | orchestrator |
2026-04-17 04:14:26.948921 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-17 04:14:26.948928 | orchestrator | Friday 17 April 2026 04:12:50 +0000 (0:00:00.655) 0:00:12.095 **********
2026-04-17 04:14:26.948934 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:14:26.948940 | orchestrator |
2026-04-17 04:14:26.948946 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-17 04:14:26.948952 | orchestrator | Friday 17 April 2026 04:12:51 +0000 (0:00:01.233) 0:00:13.329 **********
2026-04-17 04:14:26.948959 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:14:26.948966 | orchestrator |
2026-04-17 04:14:26.948973 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-17 04:14:26.948981 | orchestrator | Friday 17 April 2026 04:12:54 +0000 (0:00:02.692) 0:00:16.022 **********
2026-04-17 04:14:26.948987 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:14:26.948995 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:14:26.949002 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:14:26.949009 | orchestrator |
2026-04-17 04:14:26.949016 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-17 04:14:26.949023 | orchestrator | Friday 17 April 2026 04:12:54 +0000 (0:00:00.324) 0:00:16.346 **********
2026-04-17 04:14:26.949031 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:14:26.949038 | orchestrator |
2026-04-17 04:14:26.949045 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-17 04:14:26.949051 | orchestrator | Friday 17 April 2026 04:13:24 +0000 (0:00:30.342) 0:00:46.688 **********
2026-04-17 04:14:26.949058 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:14:26.949066 | orchestrator |
2026-04-17 04:14:26.949073 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-17 04:14:26.949080 | orchestrator | Friday 17 April 2026 04:13:38 +0000 (0:00:13.670) 0:01:00.359 **********
2026-04-17 04:14:26.949087 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:14:26.949094 | orchestrator |
2026-04-17 04:14:26.949100 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-17 04:14:26.949108 | orchestrator | Friday 17 April 2026 04:13:50 +0000 (0:00:11.707) 0:01:12.067 **********
2026-04-17 04:14:26.949129 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:14:26.949136 | orchestrator |
2026-04-17 04:14:26.949143 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-17 04:14:26.949150 | orchestrator | Friday 17 April 2026 04:13:51 +0000 (0:00:00.679) 0:01:12.746 **********
2026-04-17 04:14:26.949162 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:14:26.949169 | orchestrator |
2026-04-17 04:14:26.949175 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-17 04:14:26.949182 | orchestrator | Friday 17 April 2026 04:13:51 +0000 (0:00:00.493) 0:01:13.240 **********
2026-04-17 04:14:26.949188 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:14:26.949194 | orchestrator |
2026-04-17 04:14:26.949201 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-17 04:14:26.949207 | orchestrator | Friday 17 April 2026 04:13:52 +0000 (0:00:00.712) 0:01:13.952 **********
2026-04-17 04:14:26.949220 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:14:26.949227 | orchestrator |
2026-04-17 04:14:26.949233 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-17 04:14:26.949239 | orchestrator | Friday 17 April 2026 04:14:09 +0000 (0:00:16.909) 0:01:30.862 **********
2026-04-17 04:14:26.949245 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:14:26.949251 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:14:26.949257 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:14:26.949264 | orchestrator |
2026-04-17 04:14:26.949270 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-17 04:14:26.949276 | orchestrator |
2026-04-17 04:14:26.949282 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-17 04:14:26.949288 | orchestrator | Friday 17 April 2026 04:14:09 +0000 (0:00:00.310) 0:01:31.172 **********
2026-04-17 04:14:26.949294 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:14:26.949300 | orchestrator |
2026-04-17 04:14:26.949307 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-17 04:14:26.949313 | orchestrator | Friday 17 April 2026 04:14:10 +0000 (0:00:00.804) 0:01:31.977 **********
2026-04-17 04:14:26.949319 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:14:26.949325 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:14:26.949331 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:14:26.949337 | orchestrator |
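Result lines throughout this log follow Ansible's `STATUS: [host]` pattern, optionally with an `=> (item=...)` suffix for loop iterations. A hypothetical tally helper (a sketch, not part of the job; it assumes per-item lines are always followed by a host-level summary line, as in the output above) shows how a per-host recap like the PLAY RECAP can be derived from them:

```python
import re
from collections import Counter

# "changed: [testbed-node-0] => (item=nova_cell0)" -> status/host pair;
# the host pattern stops at whitespace so delegated forms like
# "[testbed-node-0 -> ...]" still yield the inventory hostname.
STATUS_RE = re.compile(r"(?P<status>ok|changed|skipping): \[(?P<host>[^\]\s]+)")

def tally(lines):
    """Count ok/changed/skipping results per host, skipping per-item
    detail lines so each looped task is counted once via its summary."""
    counts = {}
    for line in lines:
        if "=> (item=" in line:
            continue  # per-item detail; a summary line follows
        m = STATUS_RE.search(line)
        if m:
            counts.setdefault(m.group("host"), Counter())[m.group("status")] += 1
    return counts

sample = [
    "changed: [testbed-node-0] => (item=nova_cell0)",
    "changed: [testbed-node-0]",
    "skipping: [testbed-node-1]",
]
print(tally(sample))
```

A tally like this is useful for cross-checking that, e.g., only testbed-node-0 performed the database bootstrap work while the other controllers skipped it.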
2026-04-17 04:14:26.949343 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-17 04:14:26.949350 | orchestrator | Friday 17 April 2026 04:14:12 +0000 (0:00:01.892) 0:01:33.869 **********
2026-04-17 04:14:26.949356 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:14:26.949362 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:14:26.949368 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:14:26.949374 | orchestrator |
2026-04-17 04:14:26.949380 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-17 04:14:26.949386 | orchestrator | Friday 17 April 2026 04:14:14 +0000 (0:00:01.953) 0:01:35.822 **********
2026-04-17 04:14:26.949392 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:14:26.949398 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:14:26.949405 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:14:26.949411 | orchestrator |
2026-04-17 04:14:26.949417 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-17 04:14:26.949423 | orchestrator | Friday 17 April 2026 04:14:14 +0000 (0:00:00.531) 0:01:36.354 **********
2026-04-17 04:14:26.949429 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-17 04:14:26.949435 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:14:26.949441 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-17 04:14:26.949447 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:14:26.949454 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-17 04:14:26.949460 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-17 04:14:26.949466 | orchestrator |
2026-04-17 04:14:26.949472 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-17 04:14:26.949478 | orchestrator | Friday 17 April 2026 04:14:21 +0000 (0:00:06.950) 0:01:43.304 **********
2026-04-17 04:14:26.949484 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:14:26.949491 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:14:26.949497 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:14:26.949503 | orchestrator |
2026-04-17 04:14:26.949509 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-17 04:14:26.949515 | orchestrator | Friday 17 April 2026 04:14:21 +0000 (0:00:00.371) 0:01:43.676 **********
2026-04-17 04:14:26.949521 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-17 04:14:26.949527 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:14:26.949534 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-17 04:14:26.949540 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:14:26.949546 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-17 04:14:26.949557 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:14:26.949563 | orchestrator |
2026-04-17 04:14:26.949569 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-17 04:14:26.949575 | orchestrator | Friday 17 April 2026 04:14:23 +0000 (0:00:01.201) 0:01:44.877 **********
2026-04-17 04:14:26.949581 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:14:26.949588 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:14:26.949594 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:14:26.949600 | orchestrator |
2026-04-17 04:14:26.949606 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-17 04:14:26.949612 | orchestrator | Friday 17 April 2026 04:14:23 +0000 (0:00:00.469) 0:01:45.347 **********
2026-04-17 04:14:26.949618 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:14:26.949625 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:14:26.949631 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:14:26.949637 | orchestrator |
2026-04-17 04:14:26.949643 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-17 04:14:26.949649 | orchestrator | Friday 17 April 2026 04:14:24 +0000 (0:00:00.924) 0:01:46.271 **********
2026-04-17 04:14:26.949655 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:14:26.949662 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:14:26.949672 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:15:39.107733 | orchestrator |
2026-04-17 04:15:39.107852 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-17 04:15:39.107922 | orchestrator | Friday 17 April 2026 04:14:26 +0000 (0:00:02.381) 0:01:48.652 **********
2026-04-17 04:15:39.107937 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:15:39.107946 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:15:39.107954 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:15:39.107963 | orchestrator |
2026-04-17 04:15:39.107971 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-17 04:15:39.107979 | orchestrator | Friday 17 April 2026 04:14:46 +0000 (0:00:19.877) 0:02:08.530 **********
2026-04-17 04:15:39.107986 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:15:39.107993 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:15:39.108001 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:15:39.108008 | orchestrator |
2026-04-17 04:15:39.108015 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-17 04:15:39.108023 | orchestrator | Friday 17 April 2026 04:14:58 +0000 (0:00:11.232) 0:02:19.763 **********
2026-04-17 04:15:39.108030 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:15:39.108037 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:15:39.108044 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:15:39.108052 | orchestrator |
2026-04-17 04:15:39.108059 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-04-17 04:15:39.108066 | orchestrator | Friday 17 April 2026 04:14:58 +0000 (0:00:00.895) 0:02:20.659 **********
2026-04-17 04:15:39.108074 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:15:39.108081 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:15:39.108089 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:15:39.108096 | orchestrator |
2026-04-17 04:15:39.108104 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-04-17 04:15:39.108111 | orchestrator | Friday 17 April 2026 04:15:09 +0000 (0:00:10.503) 0:02:31.162 **********
2026-04-17 04:15:39.108118 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:15:39.108125 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:15:39.108133 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:15:39.108140 | orchestrator |
2026-04-17 04:15:39.108147 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-17 04:15:39.108154 | orchestrator | Friday 17 April 2026 04:15:10 +0000 (0:00:00.956) 0:02:32.118 **********
2026-04-17 04:15:39.108162 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:15:39.108169 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:15:39.108197 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:15:39.108205 | orchestrator |
2026-04-17 04:15:39.108212 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-17 04:15:39.108219 | orchestrator |
2026-04-17 04:15:39.108226 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-17 04:15:39.108233 | orchestrator | Friday 17 April 2026 04:15:10 +0000 (0:00:00.283) 0:02:32.402 **********
2026-04-17 04:15:39.108240 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:15:39.108248 | orchestrator |
2026-04-17 04:15:39.108256 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-04-17 04:15:39.108265 | orchestrator | Friday 17 April 2026 04:15:11 +0000 (0:00:00.634) 0:02:33.036 **********
2026-04-17 04:15:39.108274 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-04-17 04:15:39.108282 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-04-17 04:15:39.108291 | orchestrator |
2026-04-17 04:15:39.108299 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-04-17 04:15:39.108307 | orchestrator | Friday 17 April 2026 04:15:14 +0000 (0:00:03.116) 0:02:36.153 **********
2026-04-17 04:15:39.108316 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-04-17 04:15:39.108363 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-04-17 04:15:39.108372 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-04-17 04:15:39.108381 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-04-17 04:15:39.108390 | orchestrator |
2026-04-17 04:15:39.108398 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-04-17 04:15:39.108407 | orchestrator | Friday 17 April 2026 04:15:20 +0000 (0:00:06.205) 0:02:42.358 **********
2026-04-17 04:15:39.108415 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-17 04:15:39.108424 | orchestrator |
2026-04-17 04:15:39.108432 | orchestrator | TASK [service-ks-register : nova | Creating users]
***************************** 2026-04-17 04:15:39.108440 | orchestrator | Friday 17 April 2026 04:15:23 +0000 (0:00:03.099) 0:02:45.457 ********** 2026-04-17 04:15:39.108448 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 04:15:39.108457 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-17 04:15:39.108465 | orchestrator | 2026-04-17 04:15:39.108474 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-17 04:15:39.108482 | orchestrator | Friday 17 April 2026 04:15:27 +0000 (0:00:03.780) 0:02:49.238 ********** 2026-04-17 04:15:39.108490 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 04:15:39.108499 | orchestrator | 2026-04-17 04:15:39.108507 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-17 04:15:39.108515 | orchestrator | Friday 17 April 2026 04:15:30 +0000 (0:00:03.027) 0:02:52.266 ********** 2026-04-17 04:15:39.108535 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-17 04:15:39.108543 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-17 04:15:39.108551 | orchestrator | 2026-04-17 04:15:39.108560 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-17 04:15:39.108587 | orchestrator | Friday 17 April 2026 04:15:37 +0000 (0:00:07.252) 0:02:59.518 ********** 2026-04-17 04:15:39.108609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:39.108641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:39.108651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:39.108671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-17 04:15:43.662522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:15:43.662643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:15:43.662657 | orchestrator | 2026-04-17 04:15:43.662666 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-17 04:15:43.662675 | orchestrator | Friday 17 April 2026 04:15:39 +0000 (0:00:01.294) 0:03:00.813 ********** 2026-04-17 04:15:43.662687 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:15:43.662699 | orchestrator | 2026-04-17 04:15:43.662711 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-17 04:15:43.662722 | orchestrator | Friday 17 April 2026 04:15:39 +0000 (0:00:00.137) 0:03:00.950 ********** 2026-04-17 04:15:43.662733 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:15:43.662744 | 
orchestrator | skipping: [testbed-node-1] 2026-04-17 04:15:43.662756 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:15:43.662768 | orchestrator | 2026-04-17 04:15:43.662777 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-17 04:15:43.662783 | orchestrator | Friday 17 April 2026 04:15:39 +0000 (0:00:00.311) 0:03:01.262 ********** 2026-04-17 04:15:43.662790 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 04:15:43.662797 | orchestrator | 2026-04-17 04:15:43.662803 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-17 04:15:43.662810 | orchestrator | Friday 17 April 2026 04:15:40 +0000 (0:00:00.714) 0:03:01.976 ********** 2026-04-17 04:15:43.662817 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:15:43.662823 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:15:43.662830 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:15:43.662836 | orchestrator | 2026-04-17 04:15:43.662843 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-17 04:15:43.662850 | orchestrator | Friday 17 April 2026 04:15:40 +0000 (0:00:00.530) 0:03:02.507 ********** 2026-04-17 04:15:43.662925 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:15:43.662942 | orchestrator | 2026-04-17 04:15:43.662949 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-17 04:15:43.662957 | orchestrator | Friday 17 April 2026 04:15:41 +0000 (0:00:00.550) 0:03:03.057 ********** 2026-04-17 04:15:43.662967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:43.663014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:43.663025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:43.663032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:15:43.663043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:15:43.663068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:15:43.663080 | orchestrator | 2026-04-17 04:15:43.663099 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-17 04:15:45.340066 | orchestrator | Friday 17 April 2026 04:15:43 +0000 (0:00:02.313) 0:03:05.371 ********** 2026-04-17 04:15:45.340169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 04:15:45.340182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:15:45.340190 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:15:45.340199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 04:15:45.340227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:15:45.340234 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:15:45.340272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 04:15:45.340280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:15:45.340287 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:15:45.340293 | orchestrator | 2026-04-17 04:15:45.340300 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-17 04:15:45.340307 | orchestrator | Friday 17 April 2026 04:15:44 +0000 (0:00:00.874) 0:03:06.245 
********** 2026-04-17 04:15:45.340313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 04:15:45.340326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:15:45.340332 | orchestrator | skipping: [testbed-node-0] 
2026-04-17 04:15:45.340347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 04:15:47.640497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:15:47.640599 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
04:15:47.640622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 04:15:47.640664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:15:47.640676 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
04:15:47.640687 | orchestrator | 2026-04-17 04:15:47.640700 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-17 04:15:47.640714 | orchestrator | Friday 17 April 2026 04:15:45 +0000 (0:00:00.801) 0:03:07.047 ********** 2026-04-17 04:15:47.640743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:47.640788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:47.640805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:47.640822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:15:47.640835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:15:47.640847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-17 04:15:54.000602 | orchestrator | 2026-04-17 04:15:54.000710 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-17 04:15:54.000728 | orchestrator | Friday 17 April 2026 04:15:47 +0000 (0:00:02.299) 0:03:09.347 ********** 2026-04-17 04:15:54.000790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:54.000838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:54.000930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:54.000966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:15:54.000977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:15:54.000984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:15:54.000999 | orchestrator | 2026-04-17 04:15:54.001006 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-17 04:15:54.001014 | orchestrator | Friday 17 April 2026 04:15:53 +0000 (0:00:05.759) 0:03:15.106 ********** 2026-04-17 04:15:54.001021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 04:15:54.001041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:15:54.001049 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:15:54.001067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 04:15:58.335851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:15:58.335959 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:15:58.335973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 04:15:58.335996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:15:58.336001 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:15:58.336006 | orchestrator |
2026-04-17 04:15:58.336011 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-04-17 04:15:58.336016 | orchestrator | Friday 17 April 2026 04:15:53 +0000 (0:00:00.606) 0:03:15.712 **********
2026-04-17 04:15:58.336020 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:15:58.336024 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:15:58.336028 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:15:58.336031 | orchestrator |
2026-04-17 04:15:58.336035 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-04-17 04:15:58.336039 | orchestrator | Friday 17 April 2026 04:15:55 +0000 (0:00:01.548) 0:03:17.260 **********
2026-04-17 04:15:58.336043 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:15:58.336047 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:15:58.336050 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:15:58.336054 | orchestrator |
2026-04-17 04:15:58.336058 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-04-17 04:15:58.336062 | orchestrator | Friday 17 April 2026 04:15:55 +0000 (0:00:00.343) 0:03:17.604 **********
2026-04-17 04:15:58.336080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:58.336101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:58.336110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 04:15:58.336114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:15:58.336123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:15:58.336131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:16:36.127597 | orchestrator |
2026-04-17 04:16:36.127723 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-17 04:16:36.127744 | orchestrator | Friday 17 April 2026 04:15:57 +0000 (0:00:02.004) 0:03:19.608 **********
2026-04-17 04:16:36.127756 | orchestrator |
2026-04-17 04:16:36.127768 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-17 04:16:36.127776 | orchestrator | Friday 17 April 2026 04:15:58 +0000 (0:00:00.143) 0:03:19.751 **********
2026-04-17 04:16:36.127782 | orchestrator |
2026-04-17 04:16:36.127803 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-17 04:16:36.127816 | orchestrator | Friday 17 April 2026 04:15:58 +0000 (0:00:00.142) 0:03:19.894 **********
2026-04-17 04:16:36.127823 | orchestrator |
2026-04-17 04:16:36.127829 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-04-17 04:16:36.127836 | orchestrator | Friday 17 April 2026 04:15:58 +0000 (0:00:00.142) 0:03:20.036 **********
2026-04-17 04:16:36.127842 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:16:36.127850 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:16:36.127856 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:16:36.127862 | orchestrator |
2026-04-17 04:16:36.127957 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-04-17 04:16:36.127964 | orchestrator | Friday 17 April 2026 04:16:19 +0000 (0:00:21.522) 0:03:41.559 **********
2026-04-17 04:16:36.127971 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:16:36.127977 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:16:36.127983 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:16:36.127990 | orchestrator |
2026-04-17 04:16:36.127996 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-04-17 04:16:36.128002 | orchestrator |
2026-04-17 04:16:36.128009 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-17 04:16:36.128015 | orchestrator | Friday 17 April 2026 04:16:24 +0000 (0:00:04.993) 0:03:46.552 **********
2026-04-17 04:16:36.128023 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:16:36.128030 | orchestrator |
2026-04-17 04:16:36.128036 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-17 04:16:36.128043 | orchestrator | Friday 17 April 2026 04:16:25 +0000 (0:00:01.081) 0:03:47.634 **********
2026-04-17 04:16:36.128049 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:16:36.128055 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:16:36.128076 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:16:36.128083 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:16:36.128089 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:16:36.128115 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:16:36.128123 | orchestrator |
2026-04-17 04:16:36.128130 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-17 04:16:36.128137 | orchestrator | Friday 17 April 2026 04:16:26 +0000 (0:00:00.801) 0:03:48.435 **********
2026-04-17 04:16:36.128144 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:16:36.128151 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:16:36.128158 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:16:36.128165 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 04:16:36.128172 | orchestrator |
2026-04-17 04:16:36.128179 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-17 04:16:36.128187 | orchestrator | Friday 17 April 2026 04:16:27 +0000 (0:00:00.889) 0:03:49.325 **********
2026-04-17 04:16:36.128194 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-17 04:16:36.128206 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-17 04:16:36.128216 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-17 04:16:36.128230 | orchestrator |
2026-04-17 04:16:36.128244 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-17 04:16:36.128254 | orchestrator | Friday 17 April 2026 04:16:28 +0000 (0:00:00.854) 0:03:50.179 **********
2026-04-17 04:16:36.128265 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-04-17 04:16:36.128275 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-04-17 04:16:36.128285 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-04-17 04:16:36.128294 | orchestrator |
2026-04-17 04:16:36.128303 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-17 04:16:36.128313 | orchestrator | Friday 17 April 2026 04:16:29 +0000 (0:00:01.202) 0:03:51.381 **********
2026-04-17 04:16:36.128323 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-04-17 04:16:36.128332 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:16:36.128342 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-04-17 04:16:36.128352 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:16:36.128362 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-04-17 04:16:36.128372 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:16:36.128382 | orchestrator |
2026-04-17 04:16:36.128392 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-04-17 04:16:36.128403 | orchestrator | Friday 17 April 2026 04:16:30 +0000 (0:00:00.575) 0:03:51.956 **********
2026-04-17 04:16:36.128414 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 04:16:36.128425 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 04:16:36.128436 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 04:16:36.128446 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 04:16:36.128479 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:16:36.128490 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 04:16:36.128500 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 04:16:36.128510 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:16:36.128541 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 04:16:36.128552 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 04:16:36.128562 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 04:16:36.128573 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:16:36.128581 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 04:16:36.128587 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 04:16:36.128602 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 04:16:36.128608 | orchestrator |
2026-04-17 04:16:36.128614 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-04-17 04:16:36.128621 | orchestrator | Friday 17 April 2026 04:16:31 +0000 (0:00:01.234) 0:03:53.191 **********
2026-04-17 04:16:36.128627 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:16:36.128633 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:16:36.128639 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:16:36.128645 | orchestrator | changed: [testbed-node-3]
2026-04-17 04:16:36.128651 | orchestrator | changed: [testbed-node-4]
2026-04-17 04:16:36.128657 | orchestrator | changed: [testbed-node-5]
2026-04-17 04:16:36.128663 | orchestrator |
2026-04-17 04:16:36.128670 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-04-17 04:16:36.128676 | orchestrator |
Friday 17 April 2026 04:16:32 +0000 (0:00:01.194) 0:03:54.386 ********** 2026-04-17 04:16:36.128682 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:16:36.128688 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:16:36.128694 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:16:36.128700 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:16:36.128706 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:16:36.128712 | orchestrator | changed: [testbed-node-3] 2026-04-17 04:16:36.128718 | orchestrator | 2026-04-17 04:16:36.128724 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-17 04:16:36.128730 | orchestrator | Friday 17 April 2026 04:16:34 +0000 (0:00:01.786) 0:03:56.172 ********** 2026-04-17 04:16:36.128745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 04:16:36.128756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 04:16:36.128769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:37.618746 | orchestrator | 2026-04-17 04:16:37.618766 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 04:16:37.618783 | 
orchestrator | Friday 17 April 2026 04:16:36 +0000 (0:00:02.060) 0:03:58.232 ********** 2026-04-17 04:16:37.618799 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:16:37.618815 | orchestrator | 2026-04-17 04:16:37.618828 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-17 04:16:37.618850 | orchestrator | Friday 17 April 2026 04:16:37 +0000 (0:00:01.092) 0:03:59.325 ********** 2026-04-17 04:16:40.535433 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 04:16:40.535529 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 04:16:40.535537 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 04:16:40.535543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2026-04-17 04:16:40.535563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 04:16:40.535579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 04:16:40.535584 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 04:16:40.535592 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 04:16:40.535596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 04:16:40.535600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:40.535604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:40.535612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:40.535621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:42.398268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:42.398361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 04:16:42.398369 | orchestrator | 2026-04-17 04:16:42.398374 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-17 04:16:42.398379 | orchestrator | Friday 17 April 2026 04:16:40 +0000 (0:00:03.116) 0:04:02.441 ********** 2026-04-17 04:16:42.398385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 04:16:42.398407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 04:16:42.398412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 04:16:42.398427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 04:16:42.398431 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:16:42.398439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 04:16:42.398443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 04:16:42.398451 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:16:42.398455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 04:16:42.398459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 04:16:42.398469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 04:16:43.817534 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:16:43.817641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 04:16:43.817654 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:16:43.817681 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:16:43.817688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 04:16:43.817695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:16:43.817701 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
04:16:43.817708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 04:16:43.817715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:16:43.817722 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:16:43.817728 | orchestrator | 2026-04-17 04:16:43.817736 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-17 04:16:43.817744 | orchestrator | Friday 17 April 2026 04:16:42 +0000 (0:00:01.659) 0:04:04.101 ********** 2026-04-17 04:16:43.817775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 04:16:43.817788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 04:16:43.817794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 04:16:43.817802 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:16:43.817808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 04:16:43.817815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 04:16:43.817827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 04:16:51.433576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 04:16:51.433709 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:16:51.433723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 04:16:51.433729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 04:16:51.433734 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:16:51.433740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 04:16:51.433744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:16:51.433748 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:16:51.433772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 04:16:51.433782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:16:51.433787 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:16:51.433791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 04:16:51.433795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:16:51.433799 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:16:51.433803 | orchestrator | 2026-04-17 04:16:51.433807 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 04:16:51.433812 | orchestrator | Friday 17 April 2026 04:16:44 +0000 (0:00:02.088) 0:04:06.190 ********** 2026-04-17 04:16:51.433816 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:16:51.433820 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:16:51.433824 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:16:51.433828 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 04:16:51.433832 | orchestrator | 2026-04-17 04:16:51.433836 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-17 
04:16:51.433840 | orchestrator | Friday 17 April 2026 04:16:45 +0000 (0:00:01.101) 0:04:07.292 ********** 2026-04-17 04:16:51.433843 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 04:16:51.433847 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 04:16:51.433851 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 04:16:51.433854 | orchestrator | 2026-04-17 04:16:51.433859 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-17 04:16:51.433863 | orchestrator | Friday 17 April 2026 04:16:46 +0000 (0:00:01.252) 0:04:08.544 ********** 2026-04-17 04:16:51.433946 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 04:16:51.433952 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 04:16:51.433955 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 04:16:51.433959 | orchestrator | 2026-04-17 04:16:51.433963 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-17 04:16:51.433967 | orchestrator | Friday 17 April 2026 04:16:47 +0000 (0:00:00.990) 0:04:09.534 ********** 2026-04-17 04:16:51.433970 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:16:51.433976 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:16:51.433984 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:16:51.433988 | orchestrator | 2026-04-17 04:16:51.433992 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-17 04:16:51.433996 | orchestrator | Friday 17 April 2026 04:16:48 +0000 (0:00:00.558) 0:04:10.092 ********** 2026-04-17 04:16:51.433999 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:16:51.434003 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:16:51.434007 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:16:51.434010 | orchestrator | 2026-04-17 04:16:51.434051 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-04-17 04:16:51.434056 | orchestrator | Friday 17 April 2026 04:16:48 +0000 (0:00:00.510) 0:04:10.603 ********** 2026-04-17 04:16:51.434060 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-17 04:16:51.434064 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-17 04:16:51.434068 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-17 04:16:51.434071 | orchestrator | 2026-04-17 04:16:51.434075 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-17 04:16:51.434079 | orchestrator | Friday 17 April 2026 04:16:50 +0000 (0:00:01.390) 0:04:11.993 ********** 2026-04-17 04:16:51.434087 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-17 04:17:09.623202 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-17 04:17:09.623328 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-17 04:17:09.623348 | orchestrator | 2026-04-17 04:17:09.623361 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-17 04:17:09.623373 | orchestrator | Friday 17 April 2026 04:16:51 +0000 (0:00:01.149) 0:04:13.143 ********** 2026-04-17 04:17:09.623384 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-17 04:17:09.623395 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-17 04:17:09.623406 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-17 04:17:09.623417 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-17 04:17:09.623429 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-17 04:17:09.623439 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-17 04:17:09.623450 | orchestrator | 2026-04-17 04:17:09.623461 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-17 
04:17:09.623471 | orchestrator | Friday 17 April 2026 04:16:55 +0000 (0:00:03.662) 0:04:16.805 ********** 2026-04-17 04:17:09.623482 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:17:09.623494 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:17:09.623504 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:17:09.623514 | orchestrator | 2026-04-17 04:17:09.623524 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-17 04:17:09.623533 | orchestrator | Friday 17 April 2026 04:16:55 +0000 (0:00:00.318) 0:04:17.124 ********** 2026-04-17 04:17:09.623543 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:17:09.623552 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:17:09.623564 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:17:09.623573 | orchestrator | 2026-04-17 04:17:09.623583 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-17 04:17:09.623594 | orchestrator | Friday 17 April 2026 04:16:56 +0000 (0:00:00.620) 0:04:17.745 ********** 2026-04-17 04:17:09.623606 | orchestrator | changed: [testbed-node-3] 2026-04-17 04:17:09.623617 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:17:09.623627 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:17:09.623637 | orchestrator | 2026-04-17 04:17:09.623648 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-17 04:17:09.623658 | orchestrator | Friday 17 April 2026 04:16:57 +0000 (0:00:01.268) 0:04:19.013 ********** 2026-04-17 04:17:09.623669 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-17 04:17:09.623707 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-17 04:17:09.623718 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-17 04:17:09.623729 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-17 04:17:09.623740 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-17 04:17:09.623750 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-17 04:17:09.623761 | orchestrator | 2026-04-17 04:17:09.623771 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-17 04:17:09.623782 | orchestrator | Friday 17 April 2026 04:17:00 +0000 (0:00:03.317) 0:04:22.331 ********** 2026-04-17 04:17:09.623793 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-17 04:17:09.623804 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-17 04:17:09.623816 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-17 04:17:09.623826 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-17 04:17:09.623837 | orchestrator | changed: [testbed-node-3] 2026-04-17 04:17:09.623847 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-17 04:17:09.623858 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:17:09.623889 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-17 04:17:09.623900 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:17:09.623911 | orchestrator | 2026-04-17 04:17:09.623922 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-17 04:17:09.623932 | orchestrator | Friday 17 April 2026 04:17:03 +0000 (0:00:03.274) 0:04:25.605 ********** 2026-04-17 04:17:09.623942 | 
orchestrator | skipping: [testbed-node-3] 2026-04-17 04:17:09.623952 | orchestrator | 2026-04-17 04:17:09.623962 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-17 04:17:09.623973 | orchestrator | Friday 17 April 2026 04:17:04 +0000 (0:00:00.133) 0:04:25.739 ********** 2026-04-17 04:17:09.623984 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:17:09.623995 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:17:09.624004 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:17:09.624011 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:17:09.624018 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:17:09.624026 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:17:09.624033 | orchestrator | 2026-04-17 04:17:09.624040 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-17 04:17:09.624048 | orchestrator | Friday 17 April 2026 04:17:04 +0000 (0:00:00.795) 0:04:26.535 ********** 2026-04-17 04:17:09.624055 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 04:17:09.624062 | orchestrator | 2026-04-17 04:17:09.624069 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-17 04:17:09.624077 | orchestrator | Friday 17 April 2026 04:17:05 +0000 (0:00:00.783) 0:04:27.319 ********** 2026-04-17 04:17:09.624084 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:17:09.624107 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:17:09.624114 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:17:09.624127 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:17:09.624133 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:17:09.624139 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:17:09.624145 | orchestrator | 2026-04-17 04:17:09.624151 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-04-17 04:17:09.624158 | orchestrator | Friday 17 April 2026 04:17:06 +0000 (0:00:00.796) 0:04:28.116 ********** 2026-04-17 04:17:09.624167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 04:17:09.624185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 04:17:09.624192 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 04:17:09.624200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 04:17:09.624217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 04:17:14.455532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 04:17:14.455633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 04:17:14.455644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 04:17:14.455654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 04:17:14.455663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:17:14.455672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:17:14.455716 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:17:14.455736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:17:14.455747 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:17:14.455755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:17:14.455764 | orchestrator |
2026-04-17 04:17:14.455774 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-04-17 04:17:14.455784 | orchestrator | Friday 17 April 2026 04:17:09 +0000 (0:00:03.516) 0:04:31.633 **********
2026-04-17 04:17:14.455793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 04:17:14.455814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 04:17:16.899635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 04:17:16.899734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 04:17:16.899746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 04:17:16.899754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 04:17:16.899762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:17:16.899834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:17:16.899852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:17:16.899864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 04:17:16.899933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 04:17:16.899942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 04:17:16.899950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:17:16.899972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:17:16.899987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:17:39.003802 | orchestrator |
2026-04-17 04:17:39.003926 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-04-17 04:17:39.003940 | orchestrator | Friday 17 April 2026 04:17:16 +0000 (0:00:06.973) 0:04:38.607 **********
2026-04-17 04:17:39.003948 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:17:39.003956 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:17:39.003962 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:17:39.003969 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:17:39.003976 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:17:39.003983 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:17:39.003990 | orchestrator |
2026-04-17 04:17:39.003999 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-04-17 04:17:39.004010 | orchestrator | Friday 17 April 2026 04:17:18 +0000 (0:00:01.467) 0:04:40.074 **********
2026-04-17 04:17:39.004021 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 04:17:39.004033 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 04:17:39.004051 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 04:17:39.004064 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 04:17:39.004075 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 04:17:39.004086 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 04:17:39.004098 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:17:39.004109 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 04:17:39.004119 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 04:17:39.004130 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:17:39.004141 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 04:17:39.004152 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:17:39.004163 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 04:17:39.004174 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 04:17:39.004186 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 04:17:39.004197 | orchestrator |
2026-04-17 04:17:39.004238 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-17 04:17:39.004250 | orchestrator | Friday 17 April 2026 04:17:22 +0000 (0:00:03.752) 0:04:43.826 **********
2026-04-17 04:17:39.004262 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:17:39.004276 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:17:39.004292 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:17:39.004303 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:17:39.004313 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:17:39.004324 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:17:39.004333 | orchestrator |
2026-04-17 04:17:39.004344 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-04-17 04:17:39.004354 | orchestrator | Friday 17 April 2026 04:17:22 +0000 (0:00:00.712) 0:04:44.539 **********
2026-04-17 04:17:39.004365 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 04:17:39.004378 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 04:17:39.004389 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 04:17:39.004400 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 04:17:39.004412 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 04:17:39.004424 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004436 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 04:17:39.004466 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004475 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004484 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004491 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:17:39.004500 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004507 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:17:39.004516 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004524 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:17:39.004531 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004556 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004565 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004576 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004587 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004599 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 04:17:39.004610 | orchestrator |
2026-04-17 04:17:39.004621 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-04-17 04:17:39.004632 | orchestrator | Friday 17 April 2026 04:17:28 +0000 (0:00:05.365) 0:04:49.904 **********
2026-04-17 04:17:39.004644 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 04:17:39.004665 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 04:17:39.004675 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 04:17:39.004684 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 04:17:39.004695 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 04:17:39.004706 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 04:17:39.004717 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 04:17:39.004727 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 04:17:39.004737 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 04:17:39.004747 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 04:17:39.004758 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 04:17:39.004768 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 04:17:39.004779 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 04:17:39.004790 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:17:39.004801 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 04:17:39.004812 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:17:39.004824 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 04:17:39.004835 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 04:17:39.004847 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:17:39.004858 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 04:17:39.004869 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 04:17:39.004903 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 04:17:39.004915 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 04:17:39.004927 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 04:17:39.004934 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 04:17:39.004941 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 04:17:39.004947 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 04:17:39.004954 | orchestrator |
2026-04-17 04:17:39.004961 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-17 04:17:39.004968 | orchestrator | Friday 17 April 2026 04:17:35 +0000 (0:00:06.879) 0:04:56.783 **********
2026-04-17 04:17:39.004982 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:17:39.004988 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:17:39.004995 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:17:39.005002 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:17:39.005008 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:17:39.005015 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:17:39.005021 | orchestrator |
2026-04-17 04:17:39.005028 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-17 04:17:39.005035 | orchestrator | Friday 17 April 2026 04:17:36 +0000 (0:00:00.941) 0:04:57.725 **********
2026-04-17 04:17:39.005041 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:17:39.005048 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:17:39.005055 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:17:39.005069 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:17:39.005076 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:17:39.005082 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:17:39.005089 | orchestrator |
2026-04-17 04:17:39.005095 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-17 04:17:39.005102 | orchestrator | Friday 17 April 2026 04:17:36 +0000 (0:00:00.700) 0:04:58.426 **********
2026-04-17 04:17:39.005109 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:17:39.005116 | orchestrator | changed: [testbed-node-3]
2026-04-17 04:17:39.005130 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:17:40.142854 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:17:40.142960 | orchestrator | changed: [testbed-node-4]
2026-04-17 04:17:40.142967 | orchestrator | changed: [testbed-node-5]
2026-04-17 04:17:40.142972 | orchestrator |
2026-04-17 04:17:40.142978 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-17 04:17:40.142984 | orchestrator | Friday 17 April 2026 04:17:38 +0000 (0:00:02.281) 0:05:00.707 **********
2026-04-17 04:17:40.142990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 04:17:40.142997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 04:17:40.143002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 04:17:40.143021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 04:17:40.143043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:17:40.143048 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:17:40.143064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:17:40.143068 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:17:40.143073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 04:17:40.143077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 04:17:40.143082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:17:40.143090 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:17:40.143100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 04:17:40.143110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:17:43.663424 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:17:43.663524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 04:17:43.663537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:17:43.663544 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:17:43.663551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 04:17:43.663558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:17:43.663595 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:17:43.663603 | orchestrator |
2026-04-17 04:17:43.663610 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-17 04:17:43.663618 | orchestrator | Friday 17 April 2026 04:17:40 +0000 (0:00:01.478) 0:05:02.186 **********
2026-04-17 04:17:43.663625 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-17 04:17:43.663633 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-17 04:17:43.663639 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:17:43.663646 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-17 04:17:43.663664 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-17 04:17:43.663671 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:17:43.663677 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-17 04:17:43.663683 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-17 04:17:43.663689 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:17:43.663696 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-17 04:17:43.663702 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-17 04:17:43.663708 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:17:43.663714 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-17 04:17:43.663720 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-17 04:17:43.663726 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:17:43.663732 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-17 04:17:43.663739 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-17 04:17:43.663745 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:17:43.663751 | orchestrator |
2026-04-17 04:17:43.663757 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-04-17 04:17:43.663763 | orchestrator | Friday 17 April 2026 04:17:41 +0000 (0:00:00.893) 0:05:03.079 **********
2026-04-17 04:17:43.663784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 04:17:43.663793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 04:17:43.663800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 04:17:43.663816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 04:17:43.663842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 04:17:43.663855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 04:18:36.663640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 04:18:36.663745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 04:18:36.663758 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 04:18:36.663787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:18:36.663807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:18:36.663816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:18:36.663836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:18:36.663844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:18:36.663851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 04:18:36.663865 | orchestrator |
2026-04-17 04:18:36.663873 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-17 04:18:36.663926 | orchestrator | Friday 17 April 2026 04:17:44 +0000 (0:00:02.773) 0:05:05.852 **********
2026-04-17 04:18:36.663936 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:18:36.663943 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:18:36.663950 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:18:36.663960 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:18:36.663970 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:18:36.663980 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:18:36.663991 | orchestrator |
2026-04-17 04:18:36.664001 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 04:18:36.664012 | orchestrator | Friday 17 April 2026 04:17:44 +0000 (0:00:00.785) 0:05:06.638 **********
2026-04-17 04:18:36.664022 | orchestrator |
2026-04-17 04:18:36.664033 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 04:18:36.664043 | orchestrator | Friday 17 April 2026 04:17:45 +0000 (0:00:00.144) 0:05:06.783 **********
2026-04-17 04:18:36.664055 | orchestrator |
2026-04-17 04:18:36.664065 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 04:18:36.664077 | orchestrator | Friday 17 April 2026 04:17:45 +0000 (0:00:00.137) 0:05:06.920 **********
2026-04-17 04:18:36.664084 | orchestrator |
2026-04-17 04:18:36.664096 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 04:18:36.664102 | orchestrator | Friday 17 April 2026 04:17:45 +0000 (0:00:00.139) 0:05:07.060 **********
2026-04-17 04:18:36.664108 | orchestrator |
2026-04-17 04:18:36.664114 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 04:18:36.664121 | orchestrator | Friday 17 April 2026 04:17:45 +0000 (0:00:00.141) 0:05:07.202 **********
2026-04-17 04:18:36.664127 | orchestrator |
2026-04-17 04:18:36.664132 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 04:18:36.664138 | orchestrator | Friday 17 April 2026 04:17:45 +0000 (0:00:00.292) 0:05:07.494 **********
2026-04-17 04:18:36.664144 | orchestrator |
2026-04-17 04:18:36.664150 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-04-17 04:18:36.664156 | orchestrator | Friday 17 April 2026 04:17:45 +0000 (0:00:00.138) 0:05:07.633 **********
2026-04-17 04:18:36.664162 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:18:36.664168 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:18:36.664174 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:18:36.664180 | orchestrator |
2026-04-17 04:18:36.664186 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-04-17 04:18:36.664192 | orchestrator | Friday 17 April 2026 04:17:55 +0000 (0:00:09.833) 0:05:17.467 **********
2026-04-17 04:18:36.664198 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:18:36.664204 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:18:36.664210 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:18:36.664216 | orchestrator |
2026-04-17 04:18:36.664223 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-04-17 04:18:36.664229 | orchestrator | Friday 17 April 2026 04:18:10 +0000 (0:00:14.804) 0:05:32.271 **********
2026-04-17 04:18:36.664242 | orchestrator | changed: [testbed-node-3]
2026-04-17 04:18:36.664248 | orchestrator | changed: [testbed-node-4]
2026-04-17 04:18:36.664254 | orchestrator | changed: [testbed-node-5]
2026-04-17 04:18:36.664260 | orchestrator |
2026-04-17 04:18:36.664272 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-04-17 04:20:55.544195 | orchestrator | Friday 17 April 2026 04:18:36 +0000 (0:00:26.092) 0:05:58.363 **********
2026-04-17 04:20:55.544298 | orchestrator | changed: [testbed-node-4]
2026-04-17 04:20:55.544311 | orchestrator | changed: [testbed-node-5]
2026-04-17 04:20:55.544319 | orchestrator | changed: [testbed-node-3]
2026-04-17 04:20:55.544327 | orchestrator |
2026-04-17 04:20:55.544336 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-04-17 04:20:55.544343 | orchestrator | Friday 17 April 2026 04:19:22 +0000 (0:00:45.416) 0:06:43.780 **********
2026-04-17 04:20:55.544351 | orchestrator | changed: [testbed-node-4]
2026-04-17 04:20:55.544358 | orchestrator | changed: [testbed-node-3]
2026-04-17 04:20:55.544366 | orchestrator | changed: [testbed-node-5]
2026-04-17 04:20:55.544373 | orchestrator |
2026-04-17 04:20:55.544381 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-04-17 04:20:55.544388 | orchestrator | Friday 17 April 2026 04:19:22 +0000 (0:00:00.768) 0:06:44.549 **********
2026-04-17 04:20:55.544395 | orchestrator | changed: [testbed-node-3]
2026-04-17 04:20:55.544414 | orchestrator | changed: [testbed-node-4]
2026-04-17 04:20:55.544422 | orchestrator | changed: [testbed-node-5]
2026-04-17 04:20:55.544438 | orchestrator |
2026-04-17 04:20:55.544445 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-04-17 04:20:55.544453 | orchestrator | Friday 17 April 2026 04:19:23 +0000 (0:00:00.796) 0:06:45.345 **********
2026-04-17 04:20:55.544460 | orchestrator | changed: [testbed-node-4]
2026-04-17 04:20:55.544467 | orchestrator | changed: [testbed-node-3]
2026-04-17 04:20:55.544474 | orchestrator | changed: [testbed-node-5]
2026-04-17 04:20:55.544482 | orchestrator |
2026-04-17 04:20:55.544490 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-04-17 04:20:55.544498 | orchestrator | Friday 17 April 2026 04:19:48 +0000 (0:00:25.061) 0:07:10.407 **********
2026-04-17 04:20:55.544505 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:20:55.544512 | orchestrator |
2026-04-17 04:20:55.544520 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-04-17 04:20:55.544527 | orchestrator | Friday 17 April 2026 04:19:48 +0000 (0:00:00.136) 0:07:10.544 **********
2026-04-17 04:20:55.544534 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:20:55.544541 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:20:55.544548 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:20:55.544556 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:20:55.544563 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:20:55.544571 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-04-17 04:20:55.544579 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-17 04:20:55.544586 | orchestrator |
2026-04-17 04:20:55.544593 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-04-17 04:20:55.544601 | orchestrator | Friday 17 April 2026 04:20:11 +0000 (0:00:22.714) 0:07:33.259 **********
2026-04-17 04:20:55.544608 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:20:55.544615 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:20:55.544622 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:20:55.544630 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:20:55.544637 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:20:55.544644 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:20:55.544651 | orchestrator |
2026-04-17 04:20:55.544659 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-04-17 04:20:55.544666 | orchestrator | Friday 17 April 2026 04:20:20 +0000 (0:00:08.466) 0:07:41.725 **********
2026-04-17 04:20:55.544673 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:20:55.544702 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:20:55.544711 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:20:55.544725 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:20:55.544737 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:20:55.544749 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-04-17 04:20:55.544761 | orchestrator |
2026-04-17 04:20:55.544773 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-17 04:20:55.544802 | orchestrator | Friday 17 April 2026 04:20:24 +0000 (0:00:04.133) 0:07:45.859 **********
2026-04-17 04:20:55.544815 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-17 04:20:55.544827 | orchestrator |
2026-04-17 04:20:55.544840 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-17 04:20:55.544851 | orchestrator | Friday 17 April 2026 04:20:37 +0000 (0:00:12.905) 0:07:58.764 **********
2026-04-17 04:20:55.544862 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-17 04:20:55.544873 | orchestrator |
2026-04-17 04:20:55.544885 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-04-17 04:20:55.544948 | orchestrator | Friday 17 April 2026 04:20:38 +0000 (0:00:01.620) 0:08:00.385 **********
2026-04-17 04:20:55.544962 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:20:55.544973 | orchestrator |
2026-04-17 04:20:55.544983 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-04-17 04:20:55.544995 | orchestrator | Friday 17 April 2026 04:20:40 +0000 (0:00:01.536) 0:08:01.921 **********
2026-04-17 04:20:55.545004 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-17 04:20:55.545015 | orchestrator |
2026-04-17 04:20:55.545027 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-04-17 04:20:55.545038 | orchestrator | Friday 17 April 2026 04:20:50 +0000 (0:00:10.107) 0:08:12.029 **********
2026-04-17 04:20:55.545049 | orchestrator | ok: [testbed-node-3]
2026-04-17 04:20:55.545061 | orchestrator | ok: [testbed-node-4]
2026-04-17 04:20:55.545072 | orchestrator | ok: [testbed-node-5]
2026-04-17 04:20:55.545083 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:20:55.545094 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:20:55.545104 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:20:55.545115 | orchestrator |
2026-04-17 04:20:55.545127 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-04-17 04:20:55.545138 | orchestrator |
2026-04-17 04:20:55.545149 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-17 04:20:55.545183 | orchestrator | Friday 17 April 2026 04:20:51 +0000 (0:00:01.508) 0:08:13.537 **********
2026-04-17 04:20:55.545194 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:20:55.545207 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:20:55.545219 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:20:55.545231 | orchestrator |
2026-04-17 04:20:55.545243 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-17 04:20:55.545255 | orchestrator |
2026-04-17 04:20:55.545267 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-17 04:20:55.545279 | orchestrator | Friday 17 April 2026 04:20:52 +0000 (0:00:00.908) 0:08:14.446 **********
2026-04-17 04:20:55.545290 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:20:55.545297 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:20:55.545304 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:20:55.545312 | orchestrator |
2026-04-17 04:20:55.545319 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-17 04:20:55.545326 | orchestrator |
2026-04-17 04:20:55.545333 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-17 04:20:55.545340 | orchestrator | Friday 17 April 2026 04:20:53 +0000 (0:00:00.767) 0:08:15.213 **********
2026-04-17 04:20:55.545347 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-17 04:20:55.545355 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-17 04:20:55.545373 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-17 04:20:55.545381 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-17 04:20:55.545388 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-17 04:20:55.545396 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-17 04:20:55.545403 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:20:55.545410 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-17 04:20:55.545418 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-17 04:20:55.545425 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-17 04:20:55.545432 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-17 04:20:55.545439 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-17 04:20:55.545446 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-17 04:20:55.545453 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:20:55.545460 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-17 04:20:55.545468 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-17 04:20:55.545475 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-17 04:20:55.545482 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-17 04:20:55.545489 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-17 04:20:55.545496 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-17 04:20:55.545503 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:20:55.545510 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-17 04:20:55.545518 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-17 04:20:55.545525 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-17 04:20:55.545532 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-17 04:20:55.545539 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-17 04:20:55.545546 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-17 04:20:55.545553 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:20:55.545561 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-17 04:20:55.545568 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-17 04:20:55.545575 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-17 04:20:55.545582 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-17 04:20:55.545597 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-17 04:20:55.545605 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-17 04:20:55.545612 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:20:55.545619 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-17 04:20:55.545626 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-17 04:20:55.545633 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-17 04:20:55.545641 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-17 04:20:55.545648 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-17 04:20:55.545655 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-17 04:20:55.545663 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:20:55.545670 | orchestrator |
2026-04-17 04:20:55.545691 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-17 04:20:55.545699 | orchestrator |
2026-04-17 04:20:55.545714 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-17 04:20:55.545722 | orchestrator | Friday 17 April 2026 04:20:54 +0000 (0:00:01.433) 0:08:16.646 **********
2026-04-17 04:20:55.545729 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-17 04:20:55.545742 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-17 04:20:55.545749 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:20:55.545756 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-17 04:20:55.545764 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-17 04:20:55.545771 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:20:55.545778 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-17 04:20:55.545785 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-17 04:20:55.545793 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:20:55.545800 | orchestrator |
2026-04-17 04:20:55.545815 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-17 04:20:57.298487 | orchestrator |
2026-04-17 04:20:57.298597 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-17 04:20:57.298614 | orchestrator | Friday 17 April 2026 04:20:55 +0000 (0:00:00.600) 0:08:17.247 **********
2026-04-17 04:20:57.298626 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:20:57.298638 | orchestrator |
2026-04-17 04:20:57.298650 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-17 04:20:57.298660 | orchestrator |
2026-04-17 04:20:57.298671 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-17 04:20:57.298682 | orchestrator | Friday 17 April 2026 04:20:56 +0000 (0:00:00.873) 0:08:18.121 **********
2026-04-17 04:20:57.298693 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:20:57.298704 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:20:57.298714 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:20:57.298725 | orchestrator |
2026-04-17 04:20:57.298736 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:20:57.298747 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:20:57.298761 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-04-17 04:20:57.298772 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-17 04:20:57.298783 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-17 04:20:57.298794 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-17 04:20:57.298805 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-04-17 04:20:57.298816 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-17 04:20:57.298826 | orchestrator |
2026-04-17 04:20:57.298837 | orchestrator |
2026-04-17 04:20:57.298848 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:20:57.298859 | orchestrator | Friday 17 April 2026 04:20:56 +0000 (0:00:00.479) 0:08:18.601 **********
2026-04-17 04:20:57.298870 | orchestrator | ===============================================================================
2026-04-17 04:20:57.298880 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 45.42s
2026-04-17 04:20:57.298891 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.34s
2026-04-17 04:20:57.298996 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.09s
2026-04-17 04:20:57.299008 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.06s
2026-04-17 04:20:57.299022 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.71s
2026-04-17 04:20:57.299065 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.52s
2026-04-17 04:20:57.299083 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.88s
2026-04-17 04:20:57.299102 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.91s
2026-04-17 04:20:57.299163 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.80s
2026-04-17 04:20:57.299191 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.67s
2026-04-17 04:20:57.299245 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.91s
2026-04-17 04:20:57.299263 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.71s
2026-04-17 04:20:57.299282 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.23s
2026-04-17 04:20:57.299300 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.50s
2026-04-17 04:20:57.299318 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.11s
2026-04-17 04:20:57.299335 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 9.83s
2026-04-17 04:20:57.299354 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.47s
2026-04-17 04:20:57.299372 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.25s
2026-04-17 04:20:57.299390 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 6.97s
2026-04-17 04:20:57.299410 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 6.95s
2026-04-17 04:20:59.715784 | orchestrator | 2026-04-17 04:20:59 | INFO  | Task 0459ab17-71bd-4f95-a0d4-29e41cbe9796 (horizon) was prepared for execution.
2026-04-17 04:20:59.715882 | orchestrator | 2026-04-17 04:20:59 | INFO  | It takes a moment until task 0459ab17-71bd-4f95-a0d4-29e41cbe9796 (horizon) has been started and output is visible here.
2026-04-17 04:21:07.037243 | orchestrator | 2026-04-17 04:21:07.037395 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 04:21:07.037447 | orchestrator | 2026-04-17 04:21:07.037463 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 04:21:07.037478 | orchestrator | Friday 17 April 2026 04:21:03 +0000 (0:00:00.260) 0:00:00.261 ********** 2026-04-17 04:21:07.037492 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:07.037507 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:07.037522 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:07.037536 | orchestrator | 2026-04-17 04:21:07.037550 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 04:21:07.037565 | orchestrator | Friday 17 April 2026 04:21:04 +0000 (0:00:00.303) 0:00:00.564 ********** 2026-04-17 04:21:07.037580 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-17 04:21:07.037592 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-17 04:21:07.037600 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-17 04:21:07.037609 | orchestrator | 2026-04-17 04:21:07.037617 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-17 04:21:07.037626 | orchestrator | 2026-04-17 04:21:07.037634 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-17 04:21:07.037642 | orchestrator | Friday 17 April 2026 04:21:04 +0000 (0:00:00.468) 0:00:01.033 ********** 2026-04-17 04:21:07.037651 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:21:07.037660 | orchestrator | 2026-04-17 04:21:07.037668 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 
2026-04-17 04:21:07.037676 | orchestrator | Friday 17 April 2026 04:21:05 +0000 (0:00:00.536) 0:00:01.569 ********** 2026-04-17 04:21:07.037706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 04:21:07.037759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 04:21:07.037785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 04:21:07.037797 | orchestrator | 2026-04-17 04:21:07.037806 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-17 04:21:07.037816 | orchestrator | Friday 17 April 2026 04:21:06 +0000 (0:00:01.176) 0:00:02.745 ********** 2026-04-17 04:21:07.037825 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:07.037836 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:07.037845 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:07.037855 | orchestrator | 2026-04-17 04:21:07.037864 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-17 04:21:07.037874 | orchestrator | Friday 17 April 2026 04:21:06 +0000 (0:00:00.469) 0:00:03.215 ********** 2026-04-17 04:21:07.037889 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-17 04:21:13.047758 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-17 04:21:13.047840 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-17 04:21:13.047847 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  
2026-04-17 04:21:13.047852 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-17 04:21:13.047857 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-17 04:21:13.047862 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-17 04:21:13.047867 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-17 04:21:13.047872 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-17 04:21:13.047990 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-17 04:21:13.047998 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-17 04:21:13.048003 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-17 04:21:13.048008 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-17 04:21:13.048013 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-17 04:21:13.048017 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-17 04:21:13.048022 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-17 04:21:13.048026 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-17 04:21:13.048031 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-17 04:21:13.048036 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-17 04:21:13.048040 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-17 04:21:13.048045 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'mistral', 'enabled': False})  2026-04-17 04:21:13.048049 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-17 04:21:13.048054 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-17 04:21:13.048058 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-17 04:21:13.048064 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-17 04:21:13.048070 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-17 04:21:13.048075 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-17 04:21:13.048079 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-17 04:21:13.048084 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-17 04:21:13.048101 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-17 04:21:13.048105 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-17 04:21:13.048110 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-17 
04:21:13.048115 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-17 04:21:13.048121 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-17 04:21:13.048126 | orchestrator | 2026-04-17 04:21:13.048131 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 04:21:13.048137 | orchestrator | Friday 17 April 2026 04:21:07 +0000 (0:00:00.797) 0:00:04.013 ********** 2026-04-17 04:21:13.048141 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:13.048147 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:13.048151 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:13.048163 | orchestrator | 2026-04-17 04:21:13.048167 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 04:21:13.048172 | orchestrator | Friday 17 April 2026 04:21:08 +0000 (0:00:00.334) 0:00:04.347 ********** 2026-04-17 04:21:13.048177 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048182 | orchestrator | 2026-04-17 04:21:13.048198 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 04:21:13.048203 | orchestrator | Friday 17 April 2026 04:21:08 +0000 (0:00:00.289) 0:00:04.637 ********** 2026-04-17 04:21:13.048208 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048212 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:13.048217 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:13.048221 | orchestrator | 2026-04-17 04:21:13.048226 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 04:21:13.048230 | orchestrator | Friday 17 April 2026 04:21:08 +0000 (0:00:00.310) 0:00:04.947 
********** 2026-04-17 04:21:13.048235 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:13.048240 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:13.048244 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:13.048249 | orchestrator | 2026-04-17 04:21:13.048253 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 04:21:13.048258 | orchestrator | Friday 17 April 2026 04:21:08 +0000 (0:00:00.323) 0:00:05.271 ********** 2026-04-17 04:21:13.048262 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048267 | orchestrator | 2026-04-17 04:21:13.048271 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 04:21:13.048276 | orchestrator | Friday 17 April 2026 04:21:09 +0000 (0:00:00.136) 0:00:05.407 ********** 2026-04-17 04:21:13.048281 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048285 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:13.048290 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:13.048295 | orchestrator | 2026-04-17 04:21:13.048299 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 04:21:13.048304 | orchestrator | Friday 17 April 2026 04:21:09 +0000 (0:00:00.299) 0:00:05.707 ********** 2026-04-17 04:21:13.048308 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:13.048313 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:13.048319 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:13.048324 | orchestrator | 2026-04-17 04:21:13.048330 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 04:21:13.048335 | orchestrator | Friday 17 April 2026 04:21:09 +0000 (0:00:00.517) 0:00:06.225 ********** 2026-04-17 04:21:13.048341 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048346 | orchestrator | 2026-04-17 04:21:13.048352 | orchestrator | TASK 
[horizon : Update custom policy file name] ******************************** 2026-04-17 04:21:13.048357 | orchestrator | Friday 17 April 2026 04:21:10 +0000 (0:00:00.145) 0:00:06.371 ********** 2026-04-17 04:21:13.048362 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048368 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:13.048373 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:13.048379 | orchestrator | 2026-04-17 04:21:13.048384 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 04:21:13.048389 | orchestrator | Friday 17 April 2026 04:21:10 +0000 (0:00:00.304) 0:00:06.676 ********** 2026-04-17 04:21:13.048394 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:13.048400 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:13.048405 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:13.048410 | orchestrator | 2026-04-17 04:21:13.048416 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 04:21:13.048421 | orchestrator | Friday 17 April 2026 04:21:10 +0000 (0:00:00.322) 0:00:06.998 ********** 2026-04-17 04:21:13.048427 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048432 | orchestrator | 2026-04-17 04:21:13.048437 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 04:21:13.048443 | orchestrator | Friday 17 April 2026 04:21:10 +0000 (0:00:00.127) 0:00:07.125 ********** 2026-04-17 04:21:13.048453 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048458 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:13.048464 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:13.048469 | orchestrator | 2026-04-17 04:21:13.048475 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 04:21:13.048479 | orchestrator | Friday 17 April 2026 04:21:11 +0000 
(0:00:00.499) 0:00:07.625 ********** 2026-04-17 04:21:13.048484 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:13.048489 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:13.048494 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:13.048499 | orchestrator | 2026-04-17 04:21:13.048504 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 04:21:13.048512 | orchestrator | Friday 17 April 2026 04:21:11 +0000 (0:00:00.325) 0:00:07.950 ********** 2026-04-17 04:21:13.048517 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048522 | orchestrator | 2026-04-17 04:21:13.048527 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 04:21:13.048531 | orchestrator | Friday 17 April 2026 04:21:11 +0000 (0:00:00.146) 0:00:08.097 ********** 2026-04-17 04:21:13.048536 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048541 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:13.048546 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:13.048551 | orchestrator | 2026-04-17 04:21:13.048556 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 04:21:13.048561 | orchestrator | Friday 17 April 2026 04:21:12 +0000 (0:00:00.310) 0:00:08.407 ********** 2026-04-17 04:21:13.048566 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:13.048571 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:13.048575 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:13.048580 | orchestrator | 2026-04-17 04:21:13.048585 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 04:21:13.048589 | orchestrator | Friday 17 April 2026 04:21:12 +0000 (0:00:00.291) 0:00:08.699 ********** 2026-04-17 04:21:13.048595 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048599 | orchestrator | 2026-04-17 04:21:13.048604 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 04:21:13.048609 | orchestrator | Friday 17 April 2026 04:21:12 +0000 (0:00:00.317) 0:00:09.017 ********** 2026-04-17 04:21:13.048614 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:13.048619 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:13.048624 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:13.048628 | orchestrator | 2026-04-17 04:21:13.048633 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 04:21:13.048641 | orchestrator | Friday 17 April 2026 04:21:13 +0000 (0:00:00.308) 0:00:09.325 ********** 2026-04-17 04:21:27.232725 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:27.232880 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:27.232939 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:27.232959 | orchestrator | 2026-04-17 04:21:27.232979 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 04:21:27.232999 | orchestrator | Friday 17 April 2026 04:21:13 +0000 (0:00:00.318) 0:00:09.644 ********** 2026-04-17 04:21:27.233017 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:27.233036 | orchestrator | 2026-04-17 04:21:27.233054 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 04:21:27.233072 | orchestrator | Friday 17 April 2026 04:21:13 +0000 (0:00:00.149) 0:00:09.793 ********** 2026-04-17 04:21:27.233091 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:27.233110 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:27.233129 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:27.233147 | orchestrator | 2026-04-17 04:21:27.233165 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 04:21:27.233182 | orchestrator | Friday 17 April 2026 
04:21:13 +0000 (0:00:00.315) 0:00:10.109 ********** 2026-04-17 04:21:27.233234 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:27.233257 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:27.233277 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:27.233296 | orchestrator | 2026-04-17 04:21:27.233316 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 04:21:27.233330 | orchestrator | Friday 17 April 2026 04:21:14 +0000 (0:00:00.593) 0:00:10.702 ********** 2026-04-17 04:21:27.233342 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:27.233354 | orchestrator | 2026-04-17 04:21:27.233366 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 04:21:27.233378 | orchestrator | Friday 17 April 2026 04:21:14 +0000 (0:00:00.131) 0:00:10.834 ********** 2026-04-17 04:21:27.233391 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:27.233403 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:27.233416 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:27.233428 | orchestrator | 2026-04-17 04:21:27.233440 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 04:21:27.233452 | orchestrator | Friday 17 April 2026 04:21:14 +0000 (0:00:00.309) 0:00:11.143 ********** 2026-04-17 04:21:27.233465 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:27.233477 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:27.233489 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:27.233501 | orchestrator | 2026-04-17 04:21:27.233513 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 04:21:27.233526 | orchestrator | Friday 17 April 2026 04:21:15 +0000 (0:00:00.339) 0:00:11.482 ********** 2026-04-17 04:21:27.233538 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:27.233550 | orchestrator | 2026-04-17 
04:21:27.233563 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 04:21:27.233575 | orchestrator | Friday 17 April 2026 04:21:15 +0000 (0:00:00.141) 0:00:11.624 ********** 2026-04-17 04:21:27.233587 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:27.233600 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:27.233610 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:27.233621 | orchestrator | 2026-04-17 04:21:27.233632 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 04:21:27.233642 | orchestrator | Friday 17 April 2026 04:21:15 +0000 (0:00:00.498) 0:00:12.122 ********** 2026-04-17 04:21:27.233653 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:21:27.233663 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:21:27.233674 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:21:27.233684 | orchestrator | 2026-04-17 04:21:27.233695 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 04:21:27.233705 | orchestrator | Friday 17 April 2026 04:21:16 +0000 (0:00:00.315) 0:00:12.437 ********** 2026-04-17 04:21:27.233717 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:27.233728 | orchestrator | 2026-04-17 04:21:27.233738 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 04:21:27.233749 | orchestrator | Friday 17 April 2026 04:21:16 +0000 (0:00:00.133) 0:00:12.571 ********** 2026-04-17 04:21:27.233759 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:27.233770 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:27.233780 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:27.233790 | orchestrator | 2026-04-17 04:21:27.233819 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-17 04:21:27.233830 | orchestrator | 
Friday 17 April 2026 04:21:16 +0000 (0:00:00.302) 0:00:12.873 ********** 2026-04-17 04:21:27.233841 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:21:27.233859 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:21:27.233876 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:21:27.234121 | orchestrator | 2026-04-17 04:21:27.234156 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-17 04:21:27.234175 | orchestrator | Friday 17 April 2026 04:21:18 +0000 (0:00:01.791) 0:00:14.665 ********** 2026-04-17 04:21:27.234192 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-17 04:21:27.234232 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-17 04:21:27.234250 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-17 04:21:27.234266 | orchestrator | 2026-04-17 04:21:27.234284 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-17 04:21:27.234302 | orchestrator | Friday 17 April 2026 04:21:20 +0000 (0:00:01.922) 0:00:16.588 ********** 2026-04-17 04:21:27.234320 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-17 04:21:27.234341 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-17 04:21:27.234360 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-17 04:21:27.234377 | orchestrator | 2026-04-17 04:21:27.234396 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-17 04:21:27.234445 | orchestrator | Friday 17 April 2026 04:21:22 +0000 (0:00:01.849) 0:00:18.438 ********** 2026-04-17 04:21:27.234462 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-17 04:21:27.234479 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-17 04:21:27.234497 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-17 04:21:27.234514 | orchestrator |
2026-04-17 04:21:27.234530 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-17 04:21:27.234547 | orchestrator | Friday 17 April 2026 04:21:23 +0000 (0:00:01.479) 0:00:19.917 **********
2026-04-17 04:21:27.234565 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:21:27.234583 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:21:27.234600 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:21:27.234617 | orchestrator |
2026-04-17 04:21:27.234633 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-17 04:21:27.234650 | orchestrator | Friday 17 April 2026 04:21:24 +0000 (0:00:00.651) 0:00:20.568 **********
2026-04-17 04:21:27.234666 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:21:27.234686 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:21:27.234704 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:21:27.234720 | orchestrator |
2026-04-17 04:21:27.234737 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-17 04:21:27.234753 | orchestrator | Friday 17 April 2026 04:21:24 +0000 (0:00:00.326) 0:00:20.894 **********
2026-04-17 04:21:27.234771 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:21:27.234788 | orchestrator |
2026-04-17 04:21:27.234804 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-17
04:21:27.234821 | orchestrator | Friday 17 April 2026 04:21:25 +0000 (0:00:00.610) 0:00:21.505 ********** 2026-04-17 04:21:27.234865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 04:21:27.234954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 04:21:27.874490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 04:21:27.874605 | orchestrator | 2026-04-17 04:21:27.874618 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-17 04:21:27.874627 | orchestrator | Friday 17 April 2026 04:21:27 +0000 (0:00:02.000) 0:00:23.505 ********** 2026-04-17 04:21:27.874666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 04:21:27.874684 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:27.874701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 04:21:27.874710 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:27.874726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 04:21:30.380426 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:30.380515 | orchestrator | 2026-04-17 04:21:30.380526 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-17 04:21:30.380534 | orchestrator | Friday 17 April 2026 04:21:27 +0000 (0:00:00.646) 0:00:24.152 ********** 2026-04-17 04:21:30.380571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 04:21:30.380580 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:21:30.380604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 04:21:30.380634 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:21:30.380641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 04:21:30.380648 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:21:30.380653 | orchestrator | 2026-04-17 04:21:30.380659 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-17 04:21:30.380692 | orchestrator | Friday 17 April 2026 04:21:28 +0000 (0:00:00.864) 0:00:25.016 ********** 2026-04-17 04:21:30.380706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 04:22:12.149125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 04:22:12.149248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 04:22:12.149275 | 
orchestrator |
2026-04-17 04:22:12.149280 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-17 04:22:12.149286 | orchestrator | Friday 17 April 2026 04:21:30 +0000 (0:00:01.641) 0:00:26.657 **********
2026-04-17 04:22:12.149290 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:22:12.149295 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:22:12.149298 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:22:12.149302 | orchestrator |
2026-04-17 04:22:12.149306 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-17 04:22:12.149310 | orchestrator | Friday 17 April 2026 04:21:30 +0000 (0:00:00.345) 0:00:27.003 **********
2026-04-17 04:22:12.149315 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:22:12.149319 | orchestrator |
2026-04-17 04:22:12.149322 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-04-17 04:22:12.149326 | orchestrator | Friday 17 April 2026 04:21:31 +0000 (0:00:00.541) 0:00:27.544 **********
2026-04-17 04:22:12.149330 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:22:12.149334 | orchestrator |
2026-04-17 04:22:12.149337 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-04-17 04:22:12.149341 | orchestrator | Friday 17 April 2026 04:21:33 +0000 (0:00:02.079) 0:00:29.624 **********
2026-04-17 04:22:12.149345 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:22:12.149351 | orchestrator |
2026-04-17 04:22:12.149357 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-04-17 04:22:12.149367 | orchestrator | Friday 17 April 2026 04:21:35 +0000 (0:00:02.561) 0:00:32.185 **********
2026-04-17 04:22:12.149375 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:22:12.149381 | orchestrator |
2026-04-17 04:22:12.149387 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-17 04:22:12.149392 | orchestrator | Friday 17 April 2026 04:21:51 +0000 (0:00:15.348) 0:00:47.533 **********
2026-04-17 04:22:12.149404 | orchestrator |
2026-04-17 04:22:12.149410 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-17 04:22:12.149416 | orchestrator | Friday 17 April 2026 04:21:51 +0000 (0:00:00.071) 0:00:47.605 **********
2026-04-17 04:22:12.149422 | orchestrator |
2026-04-17 04:22:12.149428 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-17 04:22:12.149434 | orchestrator | Friday 17 April 2026 04:21:51 +0000 (0:00:00.066) 0:00:47.672 **********
2026-04-17 04:22:12.149440 | orchestrator |
2026-04-17 04:22:12.149445 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-04-17 04:22:12.149452 | orchestrator | Friday 17 April 2026 04:21:51 +0000 (0:00:00.073) 0:00:47.746 **********
2026-04-17 04:22:12.149458 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:22:12.149464 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:22:12.149470 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:22:12.149476 | orchestrator |
2026-04-17 04:22:12.149482 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:22:12.149486 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-17 04:22:12.149492 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-17 04:22:12.149496 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-17 04:22:12.149499 | orchestrator |
2026-04-17 04:22:12.149503 | orchestrator |
2026-04-17 04:22:12.149507 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:22:12.149510 | orchestrator | Friday 17 April 2026 04:22:12 +0000 (0:00:20.666) 0:01:08.412 **********
2026-04-17 04:22:12.149514 | orchestrator | ===============================================================================
2026-04-17 04:22:12.149518 | orchestrator | horizon : Restart horizon container ------------------------------------ 20.67s
2026-04-17 04:22:12.149522 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.35s
2026-04-17 04:22:12.149525 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.56s
2026-04-17 04:22:12.149529 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.08s
2026-04-17 04:22:12.149533 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.00s
2026-04-17 04:22:12.149536 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.92s
2026-04-17 04:22:12.149541 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.85s
2026-04-17 04:22:12.149548 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.79s
2026-04-17 04:22:12.149552 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.64s
2026-04-17 04:22:12.149556 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.48s
2026-04-17 04:22:12.149560 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.18s
2026-04-17 04:22:12.149563 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.86s
2026-04-17 04:22:12.149567 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s
2026-04-17 04:22:12.149576 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.65s
2026-04-17 04:22:12.708422 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s
2026-04-17 04:22:12.708540 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s
2026-04-17 04:22:12.708556 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s
2026-04-17 04:22:12.708565 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s
2026-04-17 04:22:12.708613 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s
2026-04-17 04:22:12.708623 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s
2026-04-17 04:22:15.379978 | orchestrator | 2026-04-17 04:22:15 | INFO  | Task aa02890d-093c-4666-8c98-f4f6e437e31e (skyline) was prepared for execution.
2026-04-17 04:22:15.380070 | orchestrator | 2026-04-17 04:22:15 | INFO  | It takes a moment until task aa02890d-093c-4666-8c98-f4f6e437e31e (skyline) has been started and output is visible here.
2026-04-17 04:22:45.121273 | orchestrator |
2026-04-17 04:22:45.121369 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 04:22:45.121381 | orchestrator |
2026-04-17 04:22:45.121389 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 04:22:45.121397 | orchestrator | Friday 17 April 2026 04:22:19 +0000 (0:00:00.296) 0:00:00.296 **********
2026-04-17 04:22:45.121405 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:22:45.121413 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:22:45.121421 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:22:45.121428 | orchestrator |
2026-04-17 04:22:45.121436 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 04:22:45.121443 | orchestrator | Friday 17 April 2026 04:22:20 +0000 (0:00:00.314) 0:00:00.611 **********
2026-04-17 04:22:45.121451 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-04-17 04:22:45.121459 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-04-17 04:22:45.121466 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-04-17 04:22:45.121474 | orchestrator |
2026-04-17 04:22:45.121481 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-04-17 04:22:45.121488 | orchestrator |
2026-04-17 04:22:45.121496 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-04-17 04:22:45.121503 | orchestrator | Friday 17 April 2026 04:22:20 +0000 (0:00:00.539) 0:00:01.151 **********
2026-04-17 04:22:45.121511 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:22:45.121519 | orchestrator |
2026-04-17 04:22:45.121526 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-04-17 04:22:45.121534 | orchestrator | Friday 17 April 2026 04:22:21 +0000 (0:00:00.550) 0:00:01.702 **********
2026-04-17 04:22:45.121541 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-04-17 04:22:45.121548 | orchestrator |
2026-04-17 04:22:45.121556 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-04-17 04:22:45.121563 | orchestrator | Friday 17 April 2026 04:22:24 +0000 (0:00:03.172) 0:00:04.875 **********
2026-04-17 04:22:45.121571 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-04-17 04:22:45.121578 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-04-17 04:22:45.121585 | orchestrator |
2026-04-17 04:22:45.121593 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-04-17 04:22:45.121600 | orchestrator | Friday 17 April 2026 04:22:30 +0000 (0:00:06.108) 0:00:10.983 **********
2026-04-17 04:22:45.121608 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-17 04:22:45.121616 | orchestrator |
2026-04-17 04:22:45.121623 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-04-17 04:22:45.121631 | orchestrator | Friday 17 April 2026 04:22:33 +0000 (0:00:02.733) 0:00:13.716 **********
2026-04-17 04:22:45.121638 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-17 04:22:45.121646 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-04-17 04:22:45.121653 | orchestrator |
2026-04-17 04:22:45.121661 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-04-17 04:22:45.121668 | orchestrator | Friday 17 April 2026 04:22:37 +0000 (0:00:03.896) 0:00:17.613 **********
2026-04-17 04:22:45.121698 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-17 04:22:45.121706 | orchestrator |
2026-04-17 04:22:45.121714 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] *********************
2026-04-17 04:22:45.121721 | orchestrator | Friday 17 April 2026 04:22:40 +0000 (0:00:03.079) 0:00:20.693 **********
2026-04-17 04:22:45.121734 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin)
2026-04-17 04:22:45.121745 | orchestrator |
2026-04-17 04:22:45.121756 | orchestrator | TASK [skyline : Ensuring config directories exist] *****************************
2026-04-17 04:22:45.121790 | orchestrator | Friday 17 April 2026 04:22:43 +0000 (0:00:03.579) 0:00:24.273 **********
2026-04-17 04:22:45.121809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-04-17 04:22:45.121847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:45.121862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:45.121875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:45.121906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:45.121928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:48.930290 | orchestrator | 2026-04-17 04:22:48.930420 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-17 04:22:48.930446 | orchestrator | Friday 17 April 2026 04:22:45 +0000 (0:00:01.295) 0:00:25.569 ********** 2026-04-17 04:22:48.930467 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:22:48.930479 | orchestrator | 2026-04-17 04:22:48.930489 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-17 04:22:48.930499 | orchestrator | Friday 17 April 2026 04:22:45 +0000 (0:00:00.767) 0:00:26.336 ********** 2026-04-17 04:22:48.930512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:48.930526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:48.930583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:48.930613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:48.930625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:48.930635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:48.930653 | orchestrator | 2026-04-17 04:22:48.930663 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-17 04:22:48.930672 | orchestrator | Friday 17 April 2026 04:22:48 +0000 (0:00:02.418) 0:00:28.754 ********** 2026-04-17 04:22:48.930687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 04:22:48.930698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 04:22:48.930708 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:22:48.930727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 04:22:50.218835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 04:22:50.218944 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:22:50.218972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 04:22:50.218980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 04:22:50.218986 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:22:50.218993 | orchestrator | 2026-04-17 04:22:50.219000 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-17 04:22:50.219008 | orchestrator | Friday 17 April 2026 04:22:48 +0000 (0:00:00.630) 0:00:29.385 ********** 2026-04-17 04:22:50.219014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 04:22:50.219069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 04:22:50.219080 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:22:50.219088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 04:22:50.219092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 04:22:50.219096 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:22:50.219100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 04:22:50.219109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 04:22:59.001491 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:22:59.001589 | orchestrator | 2026-04-17 04:22:59.001602 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-04-17 04:22:59.001608 | orchestrator | Friday 17 April 2026 04:22:50 +0000 (0:00:01.280) 0:00:30.665 ********** 2026-04-17 04:22:59.001628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:59.001634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:59.001639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:59.001662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:59.001679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:59.001687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:59.001691 | orchestrator | 2026-04-17 04:22:59.001695 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-04-17 04:22:59.001698 | orchestrator | Friday 17 April 2026 04:22:52 +0000 (0:00:02.482) 0:00:33.148 ********** 2026-04-17 04:22:59.001702 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-17 04:22:59.001706 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-17 04:22:59.001710 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-17 04:22:59.001714 | orchestrator | 2026-04-17 04:22:59.001717 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-04-17 04:22:59.001721 | orchestrator | Friday 17 April 2026 04:22:54 +0000 (0:00:01.585) 0:00:34.733 ********** 2026-04-17 04:22:59.001725 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-17 04:22:59.001729 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-17 04:22:59.001732 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-17 04:22:59.001740 | orchestrator | 2026-04-17 04:22:59.001744 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-04-17 04:22:59.001748 | orchestrator | Friday 17 April 2026 04:22:56 +0000 (0:00:02.301) 0:00:37.034 ********** 2026-04-17 04:22:59.001752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:22:59.001761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:23:01.203485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:23:01.203621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:23:01.203686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:23:01.203709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:23:01.203731 | orchestrator | 2026-04-17 04:23:01.203745 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-17 04:23:01.203766 | orchestrator | Friday 17 April 2026 04:22:58 +0000 (0:00:02.422) 0:00:39.457 ********** 2026-04-17 04:23:01.203782 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:23:01.203812 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 04:23:01.203831 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:23:01.203848 | orchestrator | 2026-04-17 04:23:01.203891 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-04-17 04:23:01.203911 | orchestrator | Friday 17 April 2026 04:22:59 +0000 (0:00:00.353) 0:00:39.810 ********** 2026-04-17 04:23:01.203942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:23:01.203966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:23:01.204002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 04:23:01.204023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:23:01.204056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:23:28.789818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 04:23:28.789949 | orchestrator | 2026-04-17 04:23:28.789964 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-04-17 04:23:28.789975 | orchestrator | Friday 17 April 2026 04:23:01 +0000 (0:00:01.841) 0:00:41.652 ********** 2026-04-17 04:23:28.789984 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:23:28.789995 | orchestrator | 2026-04-17 04:23:28.790003 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-04-17 04:23:28.790128 | orchestrator | Friday 17 April 2026 04:23:03 +0000 (0:00:02.117) 0:00:43.769 ********** 2026-04-17 04:23:28.790219 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:23:28.790235 | orchestrator | 2026-04-17 04:23:28.790247 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-04-17 04:23:28.790261 | orchestrator | Friday 17 April 2026 04:23:05 +0000 (0:00:02.139) 0:00:45.909 ********** 2026-04-17 04:23:28.790277 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:23:28.790291 | orchestrator | 2026-04-17 04:23:28.790304 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-17 04:23:28.790318 | orchestrator | Friday 17 April 2026 04:23:12 +0000 (0:00:06.851) 0:00:52.760 ********** 2026-04-17 04:23:28.790334 | orchestrator | 2026-04-17 04:23:28.790348 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-17 04:23:28.790362 | orchestrator | Friday 17 April 2026 04:23:12 +0000 (0:00:00.068) 0:00:52.829 ********** 2026-04-17 04:23:28.790377 | orchestrator | 2026-04-17 04:23:28.790394 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-04-17 04:23:28.790411 | orchestrator | Friday 17 April 2026 04:23:12 +0000 (0:00:00.076) 0:00:52.905 ********** 2026-04-17 04:23:28.790428 | orchestrator | 2026-04-17 04:23:28.790445 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-04-17 04:23:28.790462 | orchestrator | Friday 17 April 2026 04:23:12 +0000 (0:00:00.072) 0:00:52.978 ********** 2026-04-17 04:23:28.790479 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:23:28.790497 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:23:28.790514 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:23:28.790531 | orchestrator | 2026-04-17 04:23:28.790548 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-04-17 04:23:28.790561 | orchestrator | Friday 17 April 2026 04:23:18 +0000 (0:00:06.356) 0:00:59.335 ********** 2026-04-17 04:23:28.790572 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:23:28.790584 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:23:28.790594 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:23:28.790605 | orchestrator | 2026-04-17 04:23:28.790616 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 04:23:28.790629 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 04:23:28.790642 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 04:23:28.790653 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 04:23:28.790664 | orchestrator | 2026-04-17 04:23:28.790675 | orchestrator | 2026-04-17 04:23:28.790686 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 04:23:28.790698 | orchestrator | Friday 17 April 
2026 04:23:28 +0000 (0:00:09.529) 0:01:08.865 ********** 2026-04-17 04:23:28.790708 | orchestrator | =============================================================================== 2026-04-17 04:23:28.790720 | orchestrator | skyline : Restart skyline-console container ----------------------------- 9.53s 2026-04-17 04:23:28.790749 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 6.85s 2026-04-17 04:23:28.790866 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 6.36s 2026-04-17 04:23:28.790883 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.11s 2026-04-17 04:23:28.790900 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.90s 2026-04-17 04:23:28.790934 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.58s 2026-04-17 04:23:28.790950 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.17s 2026-04-17 04:23:28.790967 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.08s 2026-04-17 04:23:28.791008 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 2.73s 2026-04-17 04:23:28.791024 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.48s 2026-04-17 04:23:28.791041 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.42s 2026-04-17 04:23:28.791057 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.42s 2026-04-17 04:23:28.791073 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.30s 2026-04-17 04:23:28.791089 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.14s 2026-04-17 04:23:28.791104 | orchestrator | skyline : Creating Skyline database 
------------------------------------- 2.12s 2026-04-17 04:23:28.791120 | orchestrator | skyline : Check skyline container --------------------------------------- 1.84s 2026-04-17 04:23:28.791167 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.59s 2026-04-17 04:23:28.791185 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.30s 2026-04-17 04:23:28.791201 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.28s 2026-04-17 04:23:28.791217 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.77s 2026-04-17 04:23:31.217874 | orchestrator | 2026-04-17 04:23:31 | INFO  | Task ff89b5b6-8065-4b51-a8d5-b3c3a827e744 (glance) was prepared for execution. 2026-04-17 04:23:31.218091 | orchestrator | 2026-04-17 04:23:31 | INFO  | It takes a moment until task ff89b5b6-8065-4b51-a8d5-b3c3a827e744 (glance) has been started and output is visible here. 
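The TASKS RECAP above interleaves task names and durations on dash-padded lines. As a minimal sketch (the regex and the sample lines are modeled on this log; `parse_recap` is an illustrative helper, not an OSISM or Ansible tool), such recap lines can be pulled apart into (task, seconds) pairs for comparison across runs:

```python
import re

# Matches kolla-ansible TASKS RECAP lines such as:
#   "skyline : Restart skyline-console container ----------------------------- 9.53s"
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task, seconds) pairs extracted from recap lines, slowest first."""
    pairs = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            pairs.append((m.group("task"), float(m.group("secs"))))
    return sorted(pairs, key=lambda p: -p[1])

sample = [
    "skyline : Restart skyline-console container ----------------------------- 9.53s",
    "skyline : Running Skyline bootstrap container --------------------------- 6.85s",
    "skyline : include_tasks ------------------------------------------------- 0.77s",
]
print(parse_recap(sample)[0])  # slowest task first
```

This kind of one-off parsing is handy when diffing periodic-job timings, since the recap ordering already ranks tasks by duration within a single run.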
2026-04-17 04:24:02.382674 | orchestrator | 2026-04-17 04:24:02.382748 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 04:24:02.382756 | orchestrator | 2026-04-17 04:24:02.382761 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 04:24:02.382766 | orchestrator | Friday 17 April 2026 04:23:35 +0000 (0:00:00.260) 0:00:00.260 ********** 2026-04-17 04:24:02.382770 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:24:02.382775 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:24:02.382779 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:24:02.382783 | orchestrator | 2026-04-17 04:24:02.382787 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 04:24:02.382791 | orchestrator | Friday 17 April 2026 04:23:35 +0000 (0:00:00.314) 0:00:00.574 ********** 2026-04-17 04:24:02.382795 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-04-17 04:24:02.382800 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-04-17 04:24:02.382804 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-04-17 04:24:02.382808 | orchestrator | 2026-04-17 04:24:02.382812 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-04-17 04:24:02.382816 | orchestrator | 2026-04-17 04:24:02.382819 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-17 04:24:02.382823 | orchestrator | Friday 17 April 2026 04:23:36 +0000 (0:00:00.438) 0:00:01.013 ********** 2026-04-17 04:24:02.382827 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:24:02.382848 | orchestrator | 2026-04-17 04:24:02.382852 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-04-17 
04:24:02.382856 | orchestrator | Friday 17 April 2026 04:23:36 +0000 (0:00:00.567) 0:00:01.581 ********** 2026-04-17 04:24:02.382860 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-04-17 04:24:02.382863 | orchestrator | 2026-04-17 04:24:02.382867 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-04-17 04:24:02.382871 | orchestrator | Friday 17 April 2026 04:23:39 +0000 (0:00:03.258) 0:00:04.839 ********** 2026-04-17 04:24:02.382875 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-04-17 04:24:02.382879 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-04-17 04:24:02.382883 | orchestrator | 2026-04-17 04:24:02.382886 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-04-17 04:24:02.382890 | orchestrator | Friday 17 April 2026 04:23:45 +0000 (0:00:05.671) 0:00:10.510 ********** 2026-04-17 04:24:02.382894 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 04:24:02.382899 | orchestrator | 2026-04-17 04:24:02.382903 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-04-17 04:24:02.382907 | orchestrator | Friday 17 April 2026 04:23:48 +0000 (0:00:02.821) 0:00:13.332 ********** 2026-04-17 04:24:02.382911 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 04:24:02.382915 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-04-17 04:24:02.382919 | orchestrator | 2026-04-17 04:24:02.382922 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-04-17 04:24:02.382926 | orchestrator | Friday 17 April 2026 04:23:51 +0000 (0:00:03.310) 0:00:16.643 ********** 2026-04-17 04:24:02.382930 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 
04:24:02.382934 | orchestrator | 2026-04-17 04:24:02.382938 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-04-17 04:24:02.382942 | orchestrator | Friday 17 April 2026 04:23:54 +0000 (0:00:02.761) 0:00:19.404 ********** 2026-04-17 04:24:02.382946 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-04-17 04:24:02.382949 | orchestrator | 2026-04-17 04:24:02.382964 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-17 04:24:02.382968 | orchestrator | Friday 17 April 2026 04:23:58 +0000 (0:00:03.626) 0:00:23.031 ********** 2026-04-17 04:24:02.382986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 04:24:02.382996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 04:24:02.383004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 04:24:02.383009 | orchestrator | 2026-04-17 04:24:02.383013 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-04-17 04:24:02.383016 | orchestrator | Friday 17 April 2026 04:24:01 +0000 (0:00:03.500) 0:00:26.532 ********** 2026-04-17 04:24:02.383021 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:24:02.383025 | orchestrator | 2026-04-17 04:24:02.383031 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-17 04:24:17.733739 | orchestrator | Friday 17 April 2026 04:24:02 +0000 (0:00:00.720) 0:00:27.252 ********** 2026-04-17 04:24:17.733857 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:24:17.733876 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:24:17.733891 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:24:17.733904 | orchestrator | 2026-04-17 04:24:17.733920 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-17 04:24:17.733933 | orchestrator | Friday 17 April 2026 04:24:05 +0000 (0:00:03.557) 0:00:30.810 ********** 2026-04-17 04:24:17.733946 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 04:24:17.733960 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 04:24:17.733972 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 04:24:17.733985 | orchestrator | 2026-04-17 04:24:17.733997 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-17 04:24:17.734010 | orchestrator | Friday 17 April 2026 04:24:07 +0000 (0:00:01.538) 0:00:32.348 ********** 2026-04-17 04:24:17.734091 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 
04:24:17.734107 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 04:24:17.734121 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 04:24:17.734135 | orchestrator | 2026-04-17 04:24:17.734150 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-17 04:24:17.734165 | orchestrator | Friday 17 April 2026 04:24:08 +0000 (0:00:01.352) 0:00:33.700 ********** 2026-04-17 04:24:17.734179 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:24:17.734195 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:24:17.734209 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:24:17.734223 | orchestrator | 2026-04-17 04:24:17.734237 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-17 04:24:17.734252 | orchestrator | Friday 17 April 2026 04:24:09 +0000 (0:00:00.679) 0:00:34.379 ********** 2026-04-17 04:24:17.734293 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:24:17.734306 | orchestrator | 2026-04-17 04:24:17.734320 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-17 04:24:17.734335 | orchestrator | Friday 17 April 2026 04:24:09 +0000 (0:00:00.138) 0:00:34.518 ********** 2026-04-17 04:24:17.734349 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:24:17.734364 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:24:17.734378 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:24:17.734392 | orchestrator | 2026-04-17 04:24:17.734405 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-17 04:24:17.734419 | orchestrator | Friday 17 April 2026 04:24:09 +0000 (0:00:00.303) 0:00:34.821 ********** 2026-04-17 04:24:17.734433 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:24:17.734448 | orchestrator | 2026-04-17 04:24:17.734462 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-17 04:24:17.734476 | orchestrator | Friday 17 April 2026 04:24:10 +0000 (0:00:00.776) 0:00:35.598 ********** 2026-04-17 04:24:17.734514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 04:24:17.734584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 04:24:17.734608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 04:24:17.734634 | orchestrator | 2026-04-17 04:24:17.734648 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-17 04:24:17.734662 | orchestrator | Friday 17 April 2026 04:24:14 +0000 (0:00:03.835) 0:00:39.434 ********** 2026-04-17 04:24:17.734687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 04:24:21.481781 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:24:21.481910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 04:24:21.481957 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:24:21.481973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 04:24:21.481987 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:24:21.481999 | orchestrator | 2026-04-17 04:24:21.482066 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-17 04:24:21.482082 | orchestrator | Friday 17 April 2026 04:24:17 +0000 (0:00:03.165) 0:00:42.599 ********** 2026-04-17 04:24:21.482118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 04:24:21.482143 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:24:21.482164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 04:24:21.482178 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:24:21.482202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 04:24:56.873969 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:24:56.874100 | orchestrator | 2026-04-17 04:24:56.874111 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-17 04:24:56.874120 | orchestrator | Friday 17 April 2026 04:24:21 +0000 (0:00:03.749) 0:00:46.349 ********** 2026-04-17 04:24:56.874127 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:24:56.874134 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:24:56.874140 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:24:56.874167 | orchestrator | 2026-04-17 04:24:56.874173 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-17 04:24:56.874179 | orchestrator | Friday 17 April 2026 04:24:25 +0000 (0:00:03.603) 0:00:49.952 ********** 2026-04-17 04:24:56.874202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 04:24:56.874211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 04:24:56.874237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 04:24:56.874249 | orchestrator |
2026-04-17 04:24:56.874256 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-04-17 04:24:56.874262 | orchestrator | Friday 17 April 2026 04:24:29 +0000 (0:00:04.024) 0:00:53.976 **********
2026-04-17 04:24:56.874269 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:24:56.874275 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:24:56.874282 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:24:56.874288 | orchestrator |
2026-04-17 04:24:56.874295 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-17 04:24:56.874301 | orchestrator | Friday 17 April 2026 04:24:34 +0000 (0:00:05.906) 0:00:59.883 **********
2026-04-17 04:24:56.874307 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:24:56.874314 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:24:56.874320 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:24:56.874327 | orchestrator |
2026-04-17 04:24:56.874333 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-04-17 04:24:56.874339 | orchestrator | Friday 17 April 2026 04:24:38 +0000 (0:00:03.793) 0:01:03.677 **********
2026-04-17 04:24:56.874393 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:24:56.874400 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:24:56.874406 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:24:56.874412 | orchestrator |
2026-04-17 04:24:56.874418 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-17 04:24:56.874424 | orchestrator | Friday 17 April 2026 04:24:42 +0000 (0:00:03.493) 0:01:07.171 **********
2026-04-17 04:24:56.874430 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:24:56.874436 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:24:56.874442 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:24:56.874448 | orchestrator |
2026-04-17 04:24:56.874454 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-17 04:24:56.874460 | orchestrator | Friday 17 April 2026 04:24:45 +0000 (0:00:03.390) 0:01:10.561 **********
2026-04-17 04:24:56.874467 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:24:56.874473 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:24:56.874479 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:24:56.874484 | orchestrator |
2026-04-17 04:24:56.874490 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-17 04:24:56.874508 | orchestrator | Friday 17 April 2026 04:24:49 +0000 (0:00:03.501) 0:01:14.063 **********
2026-04-17 04:24:56.874521 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:24:56.874526 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:24:56.874530 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:24:56.874534 | orchestrator |
2026-04-17 04:24:56.874544 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-17 04:24:56.874548 | orchestrator | Friday 17 April 2026 04:24:49 +0000 (0:00:00.458) 0:01:14.522 **********
2026-04-17 04:24:56.874553 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-17 04:24:56.874559 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:24:56.874563 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-17 04:24:56.874568 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:24:56.874572 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-17 04:24:56.874576 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:24:56.874581 | orchestrator |
2026-04-17 04:24:56.874585 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-17 04:24:56.874590 | orchestrator | Friday 17 April 2026 04:24:52 +0000 (0:00:02.815) 0:01:17.338 **********
2026-04-17 04:24:56.874594 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:24:56.874599 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:24:56.874603 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:24:56.874608 | orchestrator |
2026-04-17 04:24:56.874612 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-04-17 04:24:56.874622 | orchestrator | Friday 17 April 2026 04:24:56 +0000 (0:00:04.404) 0:01:21.742 **********
2026-04-17 04:26:04.409246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 04:26:04.409359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 04:26:04.409413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 04:26:04.409423 | orchestrator |
2026-04-17 04:26:04.409431 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-17 04:26:04.409438 | orchestrator | Friday 17 April 2026 04:25:00 +0000 (0:00:03.844) 0:01:25.587 **********
2026-04-17 04:26:04.409445 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:26:04.409452 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:26:04.409459 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:26:04.409465 | orchestrator |
2026-04-17 04:26:04.409471 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-04-17 04:26:04.409477 | orchestrator | Friday 17 April 2026 04:25:01 +0000 (0:00:00.482) 0:01:26.070 **********
2026-04-17 04:26:04.409484 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:26:04.409555 | orchestrator |
2026-04-17 04:26:04.409566 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-04-17 04:26:04.409576 | orchestrator | Friday 17 April 2026 04:25:03 +0000 (0:00:01.840) 0:01:27.910 **********
2026-04-17 04:26:04.409586 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:26:04.409596 | orchestrator |
2026-04-17 04:26:04.409606 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-17 04:26:04.409617 | orchestrator | Friday 17 April 2026 04:25:05 +0000 (0:00:02.120) 0:01:30.031 **********
2026-04-17 04:26:04.409627 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:26:04.409637 | orchestrator |
2026-04-17 04:26:04.409648 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-04-17 04:26:04.409672 | orchestrator | Friday 17 April 2026 04:25:07 +0000 (0:00:02.047) 0:01:32.079 **********
2026-04-17 04:26:04.409683 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:26:04.409693 | orchestrator |
2026-04-17 04:26:04.409703 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-04-17 04:26:04.409713 | orchestrator | Friday 17 April 2026 04:25:33 +0000 (0:00:26.291) 0:01:58.371 **********
2026-04-17 04:26:04.409724 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:26:04.409734 | orchestrator |
2026-04-17 04:26:04.409744 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-17 04:26:04.409754 | orchestrator | Friday 17 April 2026 04:25:35 +0000 (0:00:01.985) 0:02:00.356 **********
2026-04-17 04:26:04.409766 | orchestrator |
2026-04-17 04:26:04.409777 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-17 04:26:04.409788 | orchestrator | Friday 17 April 2026 04:25:35 +0000 (0:00:00.076) 0:02:00.432 **********
2026-04-17 04:26:04.409798 | orchestrator |
2026-04-17 04:26:04.409809 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-17 04:26:04.409821 | orchestrator | Friday 17 April 2026 04:25:35 +0000 (0:00:00.073) 0:02:00.506 **********
2026-04-17 04:26:04.409832 | orchestrator |
2026-04-17 04:26:04.409843 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-04-17 04:26:04.409854 | orchestrator | Friday 17 April 2026 04:25:35 +0000 (0:00:00.072) 0:02:00.578 **********
2026-04-17 04:26:04.409865 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:26:04.409876 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:26:04.409887 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:26:04.409897 | orchestrator |
2026-04-17 04:26:04.409907 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:26:04.409919 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-17 04:26:04.409932 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-17 04:26:04.409942 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-17 04:26:04.409953 | orchestrator |
2026-04-17 04:26:04.409964 | orchestrator |
2026-04-17 04:26:04.409975 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:26:04.409985 | orchestrator | Friday 17 April 2026 04:26:04 +0000 (0:00:28.691) 0:02:29.270 **********
2026-04-17 04:26:04.409996 | orchestrator | ===============================================================================
2026-04-17 04:26:04.410007 | orchestrator | glance : Restart glance-api container ---------------------------------- 28.69s
2026-04-17 04:26:04.410084 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.29s
2026-04-17 04:26:04.410097 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.91s
2026-04-17 04:26:04.410116 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.67s
2026-04-17 04:26:04.762607 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.40s
2026-04-17 04:26:04.762685 | orchestrator | glance : Copying over config.json files for services -------------------- 4.02s
2026-04-17 04:26:04.762693 | orchestrator | glance : Check glance containers ---------------------------------------- 3.84s
2026-04-17 04:26:04.762699 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.84s
2026-04-17 04:26:04.762705 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.79s
2026-04-17 04:26:04.762711 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.75s
2026-04-17 04:26:04.762716 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.63s
2026-04-17 04:26:04.762722 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.60s
2026-04-17 04:26:04.762771 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.56s
2026-04-17 04:26:04.762786 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.50s
2026-04-17 04:26:04.762792 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.50s
2026-04-17 04:26:04.762797 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.49s
2026-04-17 04:26:04.762803 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.39s
2026-04-17 04:26:04.762808 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.31s
2026-04-17 04:26:04.762814 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.26s
2026-04-17 04:26:04.762819 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.17s
2026-04-17 04:26:07.200940 | orchestrator | 2026-04-17 04:26:07 | INFO  | Task e999445b-ee45-490d-8e55-1e5af444508c (cinder) was prepared for execution.
2026-04-17 04:26:07.201043 | orchestrator | 2026-04-17 04:26:07 | INFO  | It takes a moment until task e999445b-ee45-490d-8e55-1e5af444508c (cinder) has been started and output is visible here.
2026-04-17 04:26:40.646071 | orchestrator |
2026-04-17 04:26:40.646170 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 04:26:40.646183 | orchestrator |
2026-04-17 04:26:40.646193 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 04:26:40.646201 | orchestrator | Friday 17 April 2026 04:26:11 +0000 (0:00:00.251) 0:00:00.251 **********
2026-04-17 04:26:40.646209 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:26:40.646217 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:26:40.646223 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:26:40.646228 | orchestrator |
2026-04-17 04:26:40.646234 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 04:26:40.646241 | orchestrator | Friday 17 April 2026 04:26:11 +0000 (0:00:00.360) 0:00:00.612 **********
2026-04-17 04:26:40.646248 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-04-17 04:26:40.646255 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-04-17 04:26:40.646261 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-04-17 04:26:40.646267 | orchestrator |
2026-04-17 04:26:40.646273 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-04-17 04:26:40.646279 | orchestrator |
2026-04-17 04:26:40.646286 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-17 04:26:40.646293 | orchestrator | Friday 17 April 2026 04:26:12 +0000 (0:00:00.530) 0:00:01.020 **********
2026-04-17 04:26:40.646300 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:26:40.646307 | orchestrator |
2026-04-17 04:26:40.646312 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-04-17 04:26:40.646318 | orchestrator | Friday 17 April 2026 04:26:12 +0000 (0:00:00.530) 0:00:01.551 **********
2026-04-17 04:26:40.646325 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-04-17 04:26:40.646330 | orchestrator |
2026-04-17 04:26:40.646337 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-04-17 04:26:40.646345 | orchestrator | Friday 17 April 2026 04:26:15 +0000 (0:00:03.050) 0:00:04.601 **********
2026-04-17 04:26:40.646352 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-04-17 04:26:40.646359 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-04-17 04:26:40.646365 | orchestrator |
2026-04-17 04:26:40.646371 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-04-17 04:26:40.646378 | orchestrator | Friday 17 April 2026 04:26:21 +0000 (0:00:06.005) 0:00:10.607 **********
2026-04-17 04:26:40.646384 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-17 04:26:40.646415 | orchestrator |
2026-04-17 04:26:40.646422 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-04-17 04:26:40.646427 | orchestrator | Friday 17 April 2026 04:26:24 +0000 (0:00:02.920) 0:00:13.527 **********
2026-04-17 04:26:40.646433 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-17 04:26:40.646441 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-04-17 04:26:40.646447 | orchestrator |
2026-04-17 04:26:40.646453 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-04-17 04:26:40.646459 | orchestrator | Friday 17 April 2026 04:26:28 +0000 (0:00:04.059) 0:00:17.587 **********
2026-04-17 04:26:40.646465 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-17 04:26:40.646472 | orchestrator |
2026-04-17 04:26:40.646478 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-04-17 04:26:40.646485 | orchestrator | Friday 17 April 2026 04:26:31 +0000 (0:00:03.066) 0:00:20.654 **********
2026-04-17 04:26:40.646489 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-04-17 04:26:40.646493 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-04-17 04:26:40.646497 | orchestrator |
2026-04-17 04:26:40.646501 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-04-17 04:26:40.646504 | orchestrator | Friday 17 April 2026 04:26:38 +0000 (0:00:06.873) 0:00:27.528 **********
2026-04-17 04:26:40.646523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy':
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 04:26:40.646545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 04:26:40.646549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 04:26:40.646588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:40.646595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:40.646603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:40.646609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:40.646618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:46.308524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:46.308803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:46.308828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:46.308851 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:26:46.308860 | orchestrator |
2026-04-17 04:26:46.308870 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-17 04:26:46.308880 | orchestrator | Friday 17 April 2026 04:26:40 +0000 (0:00:01.952) 0:00:29.480 **********
2026-04-17 04:26:46.308889 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:26:46.308897 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:26:46.308905 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:26:46.308913 | orchestrator |
2026-04-17 04:26:46.308921 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-17 04:26:46.308929 | orchestrator | Friday 17 April 2026 04:26:41 +0000 (0:00:00.514) 0:00:29.995 **********
2026-04-17 04:26:46.308937 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:26:46.308945 | orchestrator |
2026-04-17 04:26:46.308953 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-04-17 04:26:46.308961 | orchestrator | Friday 17 April 2026 04:26:41 +0000 (0:00:00.537) 0:00:30.532 **********
2026-04-17 04:26:46.308969 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-04-17 04:26:46.308977 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-04-17 04:26:46.308985 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-04-17 04:26:46.308993 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-04-17 04:26:46.309001 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-04-17 04:26:46.309009 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-04-17 04:26:46.309023 | orchestrator |
2026-04-17 04:26:46.309031 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-04-17 04:26:46.309039 | orchestrator | Friday 17 April 2026 04:26:43 +0000 (0:00:01.560) 0:00:32.093 **********
2026-04-17 04:26:46.309068 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-04-17 04:26:46.309080 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-17 04:26:46.309095 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-17 04:26:46.309105 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-17 04:26:46.309120 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-17 04:26:56.867428 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-17 04:26:56.867539 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 04:26:56.867570 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 04:26:56.867691 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 04:26:56.867710 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 04:26:56.867769 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 
04:26:56.867782 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 04:26:56.867794 | orchestrator | 2026-04-17 04:26:56.867805 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-17 04:26:56.867817 | orchestrator | Friday 17 April 2026 04:26:46 +0000 (0:00:03.235) 0:00:35.328 ********** 2026-04-17 04:26:56.867827 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 04:26:56.867839 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 04:26:56.867848 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 04:26:56.867858 | orchestrator | 2026-04-17 04:26:56.867868 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-17 04:26:56.867878 | orchestrator | Friday 17 April 2026 04:26:48 +0000 (0:00:01.473) 0:00:36.802 ********** 2026-04-17 04:26:56.867889 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-17 04:26:56.867900 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-17 04:26:56.867910 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-17 04:26:56.867921 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-17 04:26:56.867930 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-17 04:26:56.867942 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-17 04:26:56.867952 | orchestrator | 2026-04-17 04:26:56.867971 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-17 04:26:56.867983 | orchestrator | Friday 17 April 2026 04:26:50 +0000 (0:00:02.695) 0:00:39.497 ********** 2026-04-17 04:26:56.867994 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-17 04:26:56.868006 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-17 04:26:56.868015 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-17 04:26:56.868024 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-17 04:26:56.868045 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-17 04:26:56.868055 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-17 04:26:56.868067 | orchestrator | 2026-04-17 04:26:56.868077 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-17 04:26:56.868088 | orchestrator | Friday 17 April 2026 04:26:51 +0000 (0:00:01.008) 0:00:40.505 ********** 2026-04-17 04:26:56.868099 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:26:56.868110 | orchestrator | 2026-04-17 04:26:56.868120 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-17 04:26:56.868131 | orchestrator | Friday 17 April 2026 04:26:51 +0000 (0:00:00.136) 0:00:40.642 ********** 2026-04-17 04:26:56.868141 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:26:56.868152 | orchestrator | 
skipping: [testbed-node-1] 2026-04-17 04:26:56.868162 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:26:56.868172 | orchestrator | 2026-04-17 04:26:56.868182 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-17 04:26:56.868192 | orchestrator | Friday 17 April 2026 04:26:52 +0000 (0:00:00.512) 0:00:41.154 ********** 2026-04-17 04:26:56.868204 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:26:56.868214 | orchestrator | 2026-04-17 04:26:56.868223 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-17 04:26:56.868229 | orchestrator | Friday 17 April 2026 04:26:52 +0000 (0:00:00.575) 0:00:41.729 ********** 2026-04-17 04:26:56.868248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 04:26:57.738417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 04:26:57.738530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 04:26:57.738589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:57.738633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:57.738647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:57.738684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:57.738701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:57.738710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 
04:26:57.738731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:57.738740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:57.738748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 04:26:57.738757 | orchestrator | 2026-04-17 04:26:57.738767 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-17 04:26:57.738776 | orchestrator | Friday 17 April 2026 04:26:56 +0000 (0:00:03.967) 0:00:45.696 ********** 2026-04-17 04:26:57.738792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 04:26:57.851745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:26:57.851884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 04:26:57.851900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 04:26:57.851911 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:26:57.851924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 04:26:57.851935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:26:57.851961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-17 04:26:57.851979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 04:26:57.851989 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:26:57.852005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 04:26:57.852015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:26:57.852025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 04:26:57.852036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 04:26:57.852046 | orchestrator | skipping: 
[testbed-node-2]
2026-04-17 04:26:57.852061 | orchestrator |
2026-04-17 04:26:57.852072 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-04-17 04:26:57.852089 | orchestrator | Friday 17 April 2026 04:26:57 +0000 (0:00:00.891) 0:00:46.587 **********
2026-04-17 04:26:58.413054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:26:58.413164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:26:58.413179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:26:58.413190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:26:58.413200 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:26:58.413212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:26:58.413263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:26:58.413279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:26:58.413288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:26:58.413298 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:26:58.413307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:26:58.413316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:26:58.413332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:27:02.905839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:27:02.905922 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:27:02.905930 | orchestrator |
2026-04-17 04:27:02.905936 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-04-17 04:27:02.905954 | orchestrator | Friday 17 April 2026 04:26:58 +0000 (0:00:00.861) 0:00:47.449 **********
2026-04-17 04:27:02.905961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:27:02.905967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:27:02.905972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:27:02.906002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:27:02.906009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:27:02.906053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:27:02.906059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:27:02.906064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:27:02.906069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:27:02.906082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:27:15.398486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:27:15.399431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:27:15.399468 | orchestrator |
2026-04-17 04:27:15.399476 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-04-17 04:27:15.399484 | orchestrator | Friday 17 April 2026 04:27:02 +0000 (0:00:04.287) 0:00:51.737 **********
2026-04-17 04:27:15.399489 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-17 04:27:15.399496 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-17 04:27:15.399501 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-17 04:27:15.399507 | orchestrator |
2026-04-17 04:27:15.399512 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-04-17 04:27:15.399526 | orchestrator | Friday 17 April 2026 04:27:04 +0000 (0:00:01.798) 0:00:53.536 **********
2026-04-17 04:27:15.399533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:27:15.399557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:27:15.399583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:27:15.399594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:27:15.399601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:27:15.399606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:27:15.399617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:27:15.399623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:27:15.399664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:27:17.779042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:27:17.779133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:27:17.779146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:27:17.779175 | orchestrator |
2026-04-17 04:27:17.779186 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-04-17 04:27:17.779197 | orchestrator | Friday 17 April 2026 04:27:15 +0000 (0:00:10.703) 0:01:04.239 **********
2026-04-17 04:27:17.779204 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:27:17.779214 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:27:17.779221 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:27:17.779229 | orchestrator |
2026-04-17 04:27:17.779237 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-04-17 04:27:17.779244 | orchestrator | Friday 17 April 2026 04:27:16 +0000 (0:00:01.462) 0:01:05.702 **********
2026-04-17 04:27:17.779254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:27:17.779261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:27:17.779284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:27:17.779290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:27:17.779300 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:27:17.779305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:27:17.779311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:27:17.779316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:27:17.779330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:27:21.246010 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:27:21.246216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:27:21.246292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:27:21.246308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 04:27:21.246321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 04:27:21.246333 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:27:21.246345 | orchestrator |
2026-04-17 04:27:21.246358 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-04-17 04:27:21.246371 | orchestrator | Friday 17 April 2026 04:27:17 +0000 (0:00:00.903) 0:01:06.606 **********
2026-04-17 04:27:21.246382 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:27:21.246393 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:27:21.246403 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:27:21.246414 | orchestrator |
2026-04-17 04:27:21.246425 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-04-17 04:27:21.246435 | orchestrator | Friday 17 April 2026 04:27:18 +0000 (0:00:00.571) 0:01:07.177 **********
2026-04-17 04:27:21.246491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-17 04:27:21.246516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 04:27:21.246547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 04:27:21.246569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:27:21.246591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:27:21.246613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:27:21.246760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 04:28:49.893618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 04:28:49.893705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 04:28:49.893713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 04:28:49.893718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 04:28:49.893734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-04-17 04:28:49.893758 | orchestrator | 2026-04-17 04:28:49.893766 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-17 04:28:49.893773 | orchestrator | Friday 17 April 2026 04:27:21 +0000 (0:00:02.905) 0:01:10.082 ********** 2026-04-17 04:28:49.893780 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:28:49.893787 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:28:49.893793 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:28:49.893889 | orchestrator | 2026-04-17 04:28:49.893896 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-17 04:28:49.893902 | orchestrator | Friday 17 April 2026 04:27:21 +0000 (0:00:00.332) 0:01:10.415 ********** 2026-04-17 04:28:49.893909 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:28:49.893916 | orchestrator | 2026-04-17 04:28:49.893931 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-17 04:28:49.893936 | orchestrator | Friday 17 April 2026 04:27:23 +0000 (0:00:02.032) 0:01:12.448 ********** 2026-04-17 04:28:49.893940 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:28:49.893944 | orchestrator | 2026-04-17 04:28:49.893947 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-17 04:28:49.893951 | orchestrator | Friday 17 April 2026 04:27:25 +0000 (0:00:02.205) 0:01:14.654 ********** 2026-04-17 04:28:49.893955 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:28:49.893959 | orchestrator | 2026-04-17 04:28:49.893962 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-17 04:28:49.893966 | orchestrator | Friday 17 April 2026 04:27:43 +0000 (0:00:17.899) 0:01:32.553 ********** 2026-04-17 04:28:49.893970 | orchestrator | 2026-04-17 04:28:49.893974 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-04-17 04:28:49.893977 | orchestrator | Friday 17 April 2026 04:27:43 +0000 (0:00:00.063) 0:01:32.616 ********** 2026-04-17 04:28:49.893981 | orchestrator | 2026-04-17 04:28:49.893985 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-17 04:28:49.893989 | orchestrator | Friday 17 April 2026 04:27:43 +0000 (0:00:00.062) 0:01:32.679 ********** 2026-04-17 04:28:49.893992 | orchestrator | 2026-04-17 04:28:49.893996 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-17 04:28:49.894000 | orchestrator | Friday 17 April 2026 04:27:43 +0000 (0:00:00.065) 0:01:32.745 ********** 2026-04-17 04:28:49.894004 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:28:49.894007 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:28:49.894011 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:28:49.894046 | orchestrator | 2026-04-17 04:28:49.894050 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-17 04:28:49.894054 | orchestrator | Friday 17 April 2026 04:28:09 +0000 (0:00:25.856) 0:01:58.601 ********** 2026-04-17 04:28:49.894058 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:28:49.894062 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:28:49.894066 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:28:49.894069 | orchestrator | 2026-04-17 04:28:49.894073 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-17 04:28:49.894077 | orchestrator | Friday 17 April 2026 04:28:15 +0000 (0:00:05.204) 0:02:03.805 ********** 2026-04-17 04:28:49.894081 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:28:49.894084 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:28:49.894088 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:28:49.894092 | orchestrator | 2026-04-17 
04:28:49.894096 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-04-17 04:28:49.894099 | orchestrator | Friday 17 April 2026 04:28:38 +0000 (0:00:23.780) 0:02:27.586 **********
2026-04-17 04:28:49.894103 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:28:49.894107 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:28:49.894111 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:28:49.894114 | orchestrator |
2026-04-17 04:28:49.894118 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-04-17 04:28:49.894129 | orchestrator | Friday 17 April 2026 04:28:49 +0000 (0:00:10.747) 0:02:38.333 **********
2026-04-17 04:28:49.894133 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:28:49.894137 | orchestrator |
2026-04-17 04:28:49.894140 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:28:49.894145 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-17 04:28:49.894150 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 04:28:49.894154 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 04:28:49.894158 | orchestrator |
2026-04-17 04:28:49.894162 | orchestrator |
2026-04-17 04:28:49.894165 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:28:49.894169 | orchestrator | Friday 17 April 2026 04:28:49 +0000 (0:00:00.278) 0:02:38.611 **********
2026-04-17 04:28:49.894173 | orchestrator | ===============================================================================
2026-04-17 04:28:49.894177 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.86s
2026-04-17 04:28:49.894181 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.78s
2026-04-17 04:28:49.894185 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.90s
2026-04-17 04:28:49.894190 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.75s
2026-04-17 04:28:49.894194 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.70s
2026-04-17 04:28:49.894203 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.87s
2026-04-17 04:28:49.894208 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.01s
2026-04-17 04:28:49.894212 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.20s
2026-04-17 04:28:49.894216 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.29s
2026-04-17 04:28:49.894221 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.06s
2026-04-17 04:28:49.894225 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.97s
2026-04-17 04:28:49.894229 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.24s
2026-04-17 04:28:49.894234 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.07s
2026-04-17 04:28:49.894238 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.05s
2026-04-17 04:28:49.894246 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.92s
2026-04-17 04:28:50.265180 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.91s
2026-04-17 04:28:50.265272 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.70s
2026-04-17 04:28:50.265282 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.21s
2026-04-17 04:28:50.265290 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.03s
2026-04-17 04:28:50.265296 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 1.95s
2026-04-17 04:28:52.316396 | orchestrator | 2026-04-17 04:28:52 | INFO  | Task 22e42798-9eb8-4d4a-9be3-cb4ea9a97493 (barbican) was prepared for execution.
2026-04-17 04:28:52.316528 | orchestrator | 2026-04-17 04:28:52 | INFO  | It takes a moment until task 22e42798-9eb8-4d4a-9be3-cb4ea9a97493 (barbican) has been started and output is visible here.
2026-04-17 04:29:34.030243 | orchestrator |
2026-04-17 04:29:34.030368 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 04:29:34.030384 | orchestrator |
2026-04-17 04:29:34.030395 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 04:29:34.030434 | orchestrator | Friday 17 April 2026 04:28:56 +0000 (0:00:00.260) 0:00:00.260 **********
2026-04-17 04:29:34.030444 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:29:34.030454 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:29:34.030462 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:29:34.030472 | orchestrator |
2026-04-17 04:29:34.030482 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 04:29:34.030493 | orchestrator | Friday 17 April 2026 04:28:56 +0000 (0:00:00.297) 0:00:00.558 **********
2026-04-17 04:29:34.030502 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-17 04:29:34.030512 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-04-17 04:29:34.030522 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-17 04:29:34.030531 | orchestrator |
2026-04-17 04:29:34.030541 | orchestrator | PLAY [Apply role barbican]
***************************************************** 2026-04-17 04:29:34.030550 | orchestrator | 2026-04-17 04:29:34.030560 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-17 04:29:34.030569 | orchestrator | Friday 17 April 2026 04:28:57 +0000 (0:00:00.442) 0:00:01.000 ********** 2026-04-17 04:29:34.030579 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:29:34.030589 | orchestrator | 2026-04-17 04:29:34.030599 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-17 04:29:34.030608 | orchestrator | Friday 17 April 2026 04:28:57 +0000 (0:00:00.580) 0:00:01.580 ********** 2026-04-17 04:29:34.030618 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-17 04:29:34.030628 | orchestrator | 2026-04-17 04:29:34.030637 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-17 04:29:34.030646 | orchestrator | Friday 17 April 2026 04:29:01 +0000 (0:00:03.277) 0:00:04.858 ********** 2026-04-17 04:29:34.030655 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-17 04:29:34.030665 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-17 04:29:34.030673 | orchestrator | 2026-04-17 04:29:34.030682 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-17 04:29:34.030691 | orchestrator | Friday 17 April 2026 04:29:07 +0000 (0:00:06.028) 0:00:10.887 ********** 2026-04-17 04:29:34.030701 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 04:29:34.030710 | orchestrator | 2026-04-17 04:29:34.030720 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-17 
04:29:34.030729 | orchestrator | Friday 17 April 2026 04:29:10 +0000 (0:00:03.014) 0:00:13.901 ********** 2026-04-17 04:29:34.030738 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 04:29:34.030747 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-17 04:29:34.030756 | orchestrator | 2026-04-17 04:29:34.030764 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-17 04:29:34.030773 | orchestrator | Friday 17 April 2026 04:29:13 +0000 (0:00:03.832) 0:00:17.733 ********** 2026-04-17 04:29:34.030782 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 04:29:34.030791 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-17 04:29:34.030800 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-17 04:29:34.030810 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-17 04:29:34.030819 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-17 04:29:34.030828 | orchestrator | 2026-04-17 04:29:34.030853 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-17 04:29:34.030862 | orchestrator | Friday 17 April 2026 04:29:28 +0000 (0:00:14.925) 0:00:32.658 ********** 2026-04-17 04:29:34.030889 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-17 04:29:34.030906 | orchestrator | 2026-04-17 04:29:34.030915 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-17 04:29:34.030925 | orchestrator | Friday 17 April 2026 04:29:32 +0000 (0:00:03.597) 0:00:36.256 ********** 2026-04-17 04:29:34.030937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:34.030969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:34.030980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:34.030991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:34.031007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:34.031023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:34.031041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:39.649752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:39.649928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:39.649944 | orchestrator | 2026-04-17 04:29:39.649952 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-17 04:29:39.649959 | orchestrator | Friday 17 April 2026 04:29:34 +0000 (0:00:01.536) 0:00:37.793 ********** 2026-04-17 04:29:39.649966 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-17 04:29:39.649972 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-17 04:29:39.649977 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-17 04:29:39.649983 | orchestrator | 2026-04-17 04:29:39.649988 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-17 04:29:39.649994 | orchestrator | Friday 17 April 2026 04:29:35 +0000 (0:00:01.108) 0:00:38.902 ********** 2026-04-17 04:29:39.650000 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:29:39.650005 | orchestrator | 2026-04-17 04:29:39.650011 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-17 04:29:39.650086 | orchestrator | Friday 17 April 2026 04:29:35 +0000 (0:00:00.336) 0:00:39.238 ********** 2026-04-17 04:29:39.650113 | orchestrator | 
skipping: [testbed-node-0] 2026-04-17 04:29:39.650120 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:29:39.650125 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:29:39.650131 | orchestrator | 2026-04-17 04:29:39.650136 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-17 04:29:39.650142 | orchestrator | Friday 17 April 2026 04:29:35 +0000 (0:00:00.320) 0:00:39.559 ********** 2026-04-17 04:29:39.650148 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:29:39.650153 | orchestrator | 2026-04-17 04:29:39.650170 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-17 04:29:39.650176 | orchestrator | Friday 17 April 2026 04:29:36 +0000 (0:00:00.550) 0:00:40.109 ********** 2026-04-17 04:29:39.650183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:39.650203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:39.650209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:39.650216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:39.650232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:39.650238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:39.650244 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:39.650255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:41.094784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:41.094939 | orchestrator | 2026-04-17 04:29:41.094954 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-04-17 04:29:41.094959 | orchestrator | Friday 17 April 2026 04:29:39 +0000 (0:00:03.307) 0:00:43.416 ********** 2026-04-17 04:29:41.094966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 04:29:41.095003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:29:41.095008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:29:41.095012 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:29:41.095018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 04:29:41.095034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:29:41.095038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:29:41.095046 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:29:41.095053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 04:29:41.095057 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:29:41.095061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:29:41.095065 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:29:41.095069 | orchestrator | 2026-04-17 04:29:41.095073 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-17 04:29:41.095077 | orchestrator | Friday 17 April 2026 04:29:40 +0000 (0:00:00.607) 0:00:44.024 ********** 2026-04-17 04:29:41.095086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 04:29:44.430604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:29:44.430694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 
04:29:44.430723 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:29:44.430748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 04:29:44.430756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:29:44.430762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:29:44.430768 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:29:44.430792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 04:29:44.430827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:29:44.430845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:29:44.430855 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:29:44.430866 | orchestrator | 2026-04-17 04:29:44.430876 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-17 04:29:44.431070 | orchestrator | Friday 17 April 2026 04:29:41 +0000 (0:00:00.843) 0:00:44.867 ********** 2026-04-17 04:29:44.431084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:44.431093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:44.431119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:54.184271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:54.184371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:54.184381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:54.184388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:54.184397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:54.184422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:54.184429 | orchestrator | 2026-04-17 04:29:54.184436 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-17 04:29:54.184443 | orchestrator | Friday 17 April 2026 04:29:44 +0000 (0:00:03.330) 0:00:48.198 ********** 2026-04-17 04:29:54.184449 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:29:54.184457 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:29:54.184463 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:29:54.184468 | orchestrator | 2026-04-17 04:29:54.184486 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-17 04:29:54.184492 | orchestrator | Friday 17 April 2026 04:29:45 +0000 (0:00:01.578) 0:00:49.776 ********** 2026-04-17 04:29:54.184499 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 04:29:54.184505 | orchestrator | 2026-04-17 04:29:54.184511 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-17 04:29:54.184516 | orchestrator | Friday 17 April 2026 04:29:46 +0000 (0:00:00.960) 0:00:50.736 ********** 2026-04-17 04:29:54.184522 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:29:54.184528 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:29:54.184533 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:29:54.184539 | orchestrator | 2026-04-17 04:29:54.184545 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-17 04:29:54.184551 | orchestrator | Friday 17 April 2026 04:29:47 +0000 (0:00:00.567) 0:00:51.303 ********** 2026-04-17 04:29:54.184621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:54.184634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:54.184646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:54.184658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:55.069768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:55.069855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:55.069863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:55.069869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:55.069889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:55.069893 | orchestrator | 2026-04-17 04:29:55.069898 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-17 04:29:55.069925 | orchestrator | Friday 17 April 2026 04:29:54 +0000 (0:00:06.648) 0:00:57.952 ********** 2026-04-17 04:29:55.069946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 04:29:55.069954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:29:55.069965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:29:55.069971 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:29:55.069979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 04:29:55.069990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:29:55.069994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:29:55.070005 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:29:55.070051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 04:29:57.451188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:29:57.451269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:29:57.451291 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:29:57.451298 | orchestrator | 2026-04-17 04:29:57.451303 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-17 04:29:57.451308 | orchestrator | Friday 17 April 2026 04:29:55 +0000 (0:00:00.880) 0:00:58.833 ********** 2026-04-17 04:29:57.451313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:57.451318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:57.451334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 04:29:57.451342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:57.451352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:57.451356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:57.451360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:57.451364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:57.451368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:29:57.451372 | orchestrator | 2026-04-17 04:29:57.451376 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-17 04:29:57.451382 | orchestrator | Friday 17 April 2026 04:29:57 +0000 (0:00:02.381) 0:01:01.215 ********** 2026-04-17 04:30:40.785240 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:30:40.785342 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
04:30:40.785354 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:30:40.785361 | orchestrator | 2026-04-17 04:30:40.785370 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-17 04:30:40.785378 | orchestrator | Friday 17 April 2026 04:29:57 +0000 (0:00:00.348) 0:01:01.564 ********** 2026-04-17 04:30:40.785419 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:30:40.785427 | orchestrator | 2026-04-17 04:30:40.785434 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-04-17 04:30:40.785442 | orchestrator | Friday 17 April 2026 04:29:59 +0000 (0:00:02.049) 0:01:03.613 ********** 2026-04-17 04:30:40.785448 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:30:40.785455 | orchestrator | 2026-04-17 04:30:40.785461 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-17 04:30:40.785468 | orchestrator | Friday 17 April 2026 04:30:02 +0000 (0:00:02.206) 0:01:05.819 ********** 2026-04-17 04:30:40.785475 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:30:40.785482 | orchestrator | 2026-04-17 04:30:40.785489 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-17 04:30:40.785495 | orchestrator | Friday 17 April 2026 04:30:13 +0000 (0:00:11.941) 0:01:17.760 ********** 2026-04-17 04:30:40.785501 | orchestrator | 2026-04-17 04:30:40.785507 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-17 04:30:40.785513 | orchestrator | Friday 17 April 2026 04:30:14 +0000 (0:00:00.073) 0:01:17.833 ********** 2026-04-17 04:30:40.785519 | orchestrator | 2026-04-17 04:30:40.785524 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-17 04:30:40.785530 | orchestrator | Friday 17 April 2026 04:30:14 +0000 (0:00:00.070) 0:01:17.904 ********** 2026-04-17 
04:30:40.785537 | orchestrator | 2026-04-17 04:30:40.785544 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-17 04:30:40.785551 | orchestrator | Friday 17 April 2026 04:30:14 +0000 (0:00:00.073) 0:01:17.978 ********** 2026-04-17 04:30:40.785557 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:30:40.785563 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:30:40.785570 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:30:40.785576 | orchestrator | 2026-04-17 04:30:40.785582 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-17 04:30:40.785589 | orchestrator | Friday 17 April 2026 04:30:20 +0000 (0:00:06.120) 0:01:24.098 ********** 2026-04-17 04:30:40.785596 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:30:40.785603 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:30:40.785609 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:30:40.785616 | orchestrator | 2026-04-17 04:30:40.785623 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-17 04:30:40.785630 | orchestrator | Friday 17 April 2026 04:30:30 +0000 (0:00:09.804) 0:01:33.902 ********** 2026-04-17 04:30:40.785636 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:30:40.785642 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:30:40.785648 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:30:40.785655 | orchestrator | 2026-04-17 04:30:40.785662 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 04:30:40.785670 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-17 04:30:40.785677 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 04:30:40.785683 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 04:30:40.785689 | orchestrator | 2026-04-17 04:30:40.785696 | orchestrator | 2026-04-17 04:30:40.785702 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 04:30:40.785710 | orchestrator | Friday 17 April 2026 04:30:40 +0000 (0:00:10.276) 0:01:44.179 ********** 2026-04-17 04:30:40.785716 | orchestrator | =============================================================================== 2026-04-17 04:30:40.785723 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.93s 2026-04-17 04:30:40.785729 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.94s 2026-04-17 04:30:40.785742 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.28s 2026-04-17 04:30:40.785750 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.80s 2026-04-17 04:30:40.785757 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.65s 2026-04-17 04:30:40.785762 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.12s 2026-04-17 04:30:40.785769 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.03s 2026-04-17 04:30:40.785776 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.83s 2026-04-17 04:30:40.785783 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.60s 2026-04-17 04:30:40.785789 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.33s 2026-04-17 04:30:40.785797 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.31s 2026-04-17 04:30:40.785804 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.28s 
2026-04-17 04:30:40.785812 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.01s 2026-04-17 04:30:40.785818 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.38s 2026-04-17 04:30:40.785826 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.21s 2026-04-17 04:30:40.785850 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.05s 2026-04-17 04:30:40.785858 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.58s 2026-04-17 04:30:40.785865 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.54s 2026-04-17 04:30:40.785872 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.11s 2026-04-17 04:30:40.785884 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.96s 2026-04-17 04:30:43.166253 | orchestrator | 2026-04-17 04:30:43 | INFO  | Task 3f65b64c-3a0a-47bd-87ad-d53164d5e72b (designate) was prepared for execution. 2026-04-17 04:30:43.166407 | orchestrator | 2026-04-17 04:30:43 | INFO  | It takes a moment until task 3f65b64c-3a0a-47bd-87ad-d53164d5e72b (designate) has been started and output is visible here. 
2026-04-17 04:31:13.278540 | orchestrator | 2026-04-17 04:31:13.278645 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 04:31:13.278657 | orchestrator | 2026-04-17 04:31:13.278664 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 04:31:13.278671 | orchestrator | Friday 17 April 2026 04:30:47 +0000 (0:00:00.261) 0:00:00.261 ********** 2026-04-17 04:31:13.278677 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:31:13.278684 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:31:13.278691 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:31:13.278697 | orchestrator | 2026-04-17 04:31:13.278703 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 04:31:13.278709 | orchestrator | Friday 17 April 2026 04:30:47 +0000 (0:00:00.282) 0:00:00.544 ********** 2026-04-17 04:31:13.278717 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-17 04:31:13.278724 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-17 04:31:13.278730 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-17 04:31:13.278737 | orchestrator | 2026-04-17 04:31:13.278744 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-17 04:31:13.278751 | orchestrator | 2026-04-17 04:31:13.278757 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-17 04:31:13.278765 | orchestrator | Friday 17 April 2026 04:30:48 +0000 (0:00:00.395) 0:00:00.940 ********** 2026-04-17 04:31:13.278773 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:31:13.278782 | orchestrator | 2026-04-17 04:31:13.278788 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-04-17 04:31:13.278814 | orchestrator | Friday 17 April 2026 04:30:48 +0000 (0:00:00.547) 0:00:01.488 **********
2026-04-17 04:31:13.278820 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-17 04:31:13.278826 | orchestrator |
2026-04-17 04:31:13.278833 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-17 04:31:13.278839 | orchestrator | Friday 17 April 2026 04:30:51 +0000 (0:00:03.133) 0:00:04.621 **********
2026-04-17 04:31:13.278845 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-17 04:31:13.278852 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-17 04:31:13.278858 | orchestrator |
2026-04-17 04:31:13.278863 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-17 04:31:13.278870 | orchestrator | Friday 17 April 2026 04:30:57 +0000 (0:00:06.121) 0:00:10.742 **********
2026-04-17 04:31:13.278877 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-17 04:31:13.278883 | orchestrator |
2026-04-17 04:31:13.278889 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-17 04:31:13.278895 | orchestrator | Friday 17 April 2026 04:31:00 +0000 (0:00:03.098) 0:00:13.841 **********
2026-04-17 04:31:13.278901 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-17 04:31:13.278907 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-17 04:31:13.278914 | orchestrator |
2026-04-17 04:31:13.278920 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-04-17 04:31:13.278926 | orchestrator | Friday 17 April 2026 04:31:04 +0000 (0:00:03.905) 0:00:17.747 **********
2026-04-17 04:31:13.278932 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-17 04:31:13.278939 | orchestrator |
2026-04-17 04:31:13.278945 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-04-17 04:31:13.278951 | orchestrator | Friday 17 April 2026 04:31:07 +0000 (0:00:02.944) 0:00:20.692 **********
2026-04-17 04:31:13.278958 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-04-17 04:31:13.278963 | orchestrator |
2026-04-17 04:31:13.278970 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-04-17 04:31:13.278976 | orchestrator | Friday 17 April 2026 04:31:11 +0000 (0:00:03.498) 0:00:24.190 **********
2026-04-17 04:31:13.278985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 04:31:13.279068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 04:31:13.279090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 04:31:13.279099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 04:31:13.279109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 04:31:13.279117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 04:31:13.279130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 04:31:13.279146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 04:31:19.107329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 04:31:19.107459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 04:31:19.107475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 04:31:19.107486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 04:31:19.107496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 04:31:19.107523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 04:31:19.107572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 04:31:19.107584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:31:19.107595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:31:19.107605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:31:19.107615 | orchestrator |
2026-04-17 04:31:19.107627 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-04-17 04:31:19.107639 | orchestrator | Friday 17 April 2026 04:31:14 +0000 (0:00:02.734) 0:00:26.925 **********
2026-04-17 04:31:19.107648 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:31:19.107659 | orchestrator |
2026-04-17 04:31:19.107669 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-04-17 04:31:19.107678 | orchestrator | Friday 17 April 2026 04:31:14 +0000 (0:00:00.143) 0:00:27.068 **********
2026-04-17 04:31:19.107688 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:31:19.107697 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:31:19.107707 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:31:19.107717 | orchestrator |
2026-04-17 04:31:19.107727 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-17 04:31:19.107736 | orchestrator | Friday 17 April 2026 04:31:14 +0000 (0:00:00.508) 0:00:27.576 **********
2026-04-17 04:31:19.107747 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:31:19.107757 | orchestrator |
2026-04-17 04:31:19.107766 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-04-17 04:31:19.107776 | orchestrator | Friday 17 April 2026 04:31:15 +0000 (0:00:00.530) 0:00:28.107 **********
2026-04-17 04:31:19.107791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 04:31:19.107817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 04:31:20.841171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 04:31:20.841279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 04:31:20.841297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 04:31:20.841350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 04:31:20.841364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 04:31:20.841394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 04:31:20.841406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 04:31:20.841418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 04:31:20.841430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 04:31:20.841442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 04:31:20.841525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 04:31:20.841548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 04:31:20.841579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 04:31:21.712704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:31:21.712803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:31:21.712819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:31:21.712855 | orchestrator |
2026-04-17 04:31:21.712868 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-04-17 04:31:21.712881 | orchestrator | Friday 17 April 2026 04:31:20 +0000 (0:00:05.647) 0:00:33.755 **********
2026-04-17 04:31:21.712909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 04:31:21.712922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 04:31:21.712953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 04:31:21.712966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 04:31:21.712978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 04:31:21.712990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:31:21.713009 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:31:21.713027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 04:31:21.713076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 04:31:21.713088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 04:31:21.713108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 04:31:22.533648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 04:31:22.533818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:31:22.533887 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:31:22.533929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 04:31:22.533951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 04:31:22.533972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 04:31:22.533992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 04:31:22.534183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17
04:31:22.534234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:31:22.534256 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:31:22.534277 | orchestrator | 2026-04-17 04:31:22.534299 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-17 04:31:22.534321 | orchestrator | Friday 17 April 2026 04:31:21 +0000 (0:00:00.988) 0:00:34.743 ********** 2026-04-17 04:31:22.534353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 04:31:22.534374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 04:31:22.534394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 04:31:22.534424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 04:31:22.846925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:22.847102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:31:22.847122 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:31:22.847151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 04:31:22.847164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 04:31:22.847177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 04:31:22.847189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 04:31:22.847217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:22.847238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:31:22.847249 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:31:22.847266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 04:31:22.847278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 04:31:22.847290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 04:31:22.847301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 04:31:22.847327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:26.994362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:31:26.994481 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:31:26.994500 | orchestrator | 2026-04-17 04:31:26.994513 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-17 
04:31:26.994526 | orchestrator | Friday 17 April 2026 04:31:22 +0000 (0:00:01.016) 0:00:35.760 ********** 2026-04-17 04:31:26.994564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 04:31:26.994579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 04:31:26.994590 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 04:31:26.994641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:26.994655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:26.994667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:26.994684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:26.994696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:26.994708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:26.994729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:26.994749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:38.271374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:38.271499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:38.271519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:38.271531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:38.271568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:38.271581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:38.271612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:38.271625 | orchestrator | 2026-04-17 04:31:38.271638 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-17 04:31:38.271650 | orchestrator | Friday 17 April 2026 04:31:28 +0000 (0:00:05.977) 0:00:41.738 ********** 2026-04-17 04:31:38.271668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 04:31:38.271682 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 04:31:38.271694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 04:31:38.271713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:38.271735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:46.150945 | orchestrator | 2026-04-17 04:31:46.150956 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-17 04:31:46.150968 | orchestrator | Friday 17 April 2026 04:31:42 +0000 (0:00:13.785) 0:00:55.523 ********** 2026-04-17 04:31:46.150984 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-17 04:31:50.366312 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-17 04:31:50.366420 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-17 04:31:50.366434 | orchestrator | 2026-04-17 04:31:50.366445 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-17 04:31:50.366453 | orchestrator | Friday 17 April 2026 04:31:46 +0000 (0:00:03.542) 0:00:59.066 ********** 2026-04-17 04:31:50.366459 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-17 04:31:50.366464 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-17 04:31:50.366470 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-17 04:31:50.366475 | orchestrator | 2026-04-17 04:31:50.366480 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-17 04:31:50.366485 | orchestrator | Friday 17 April 2026 04:31:48 +0000 (0:00:02.373) 0:01:01.440 ********** 2026-04-17 04:31:50.366506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 04:31:50.366529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 04:31:50.366535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-04-17 04:31:50.366555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:50.366562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 04:31:50.366572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-04-17 04:31:50.366584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:50.366590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:50.366595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-04-17 04:31:50.366601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 04:31:50.366612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:53.102842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-04-17 04:31:53.102964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 04:31:53.102984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 04:31:53.102994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:53.103004 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:53.103013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:53.103041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:53.103052 | orchestrator | 2026-04-17 04:31:53.103062 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-04-17 04:31:53.103129 | orchestrator | Friday 17 April 2026 04:31:51 +0000 (0:00:02.847) 0:01:04.287 ********** 2026-04-17 04:31:53.103149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 04:31:53.103161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 
04:31:53.103171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 04:31:53.103181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:53.103197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 04:31:54.039994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 04:31:54.040068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:54.040075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:54.040134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 04:31:54.040140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 04:31:54.040144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:54.040162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:54.040186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 04:31:54.040191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 04:31:54.040195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:54.040199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:31:54.040203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:31:54.040207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:31:54.040216 | orchestrator |
2026-04-17 04:31:54.040221 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-17 04:31:54.040230 | orchestrator | Friday 17 April 2026 04:31:54 +0000 (0:00:02.662) 0:01:06.950 **********
2026-04-17 04:31:55.024347 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:31:55.024434 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:31:55.024442 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:31:55.024447 | orchestrator |
2026-04-17 04:31:55.024453 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-17 04:31:55.024460 | orchestrator | Friday 17 April 2026 04:31:54 +0000 (0:00:00.296) 0:01:07.246 **********
2026-04-17 04:31:55.024481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 04:31:55.024493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 04:31:55.024503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 04:31:55.024513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 04:31:55.024523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:55.024564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:31:55.024571 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:31:55.024580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 04:31:55.024586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 04:31:55.024591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 04:31:55.024596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 04:31:55.024601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:55.024614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:31:58.327526 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:31:58.327661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 04:31:58.327683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 04:31:58.327718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 04:31:58.327732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 04:31:58.327776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 04:31:58.327788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:31:58.327799 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:31:58.327811 | orchestrator |
2026-04-17 04:31:58.327842 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-04-17 04:31:58.327855 | orchestrator | Friday 17 April 2026 04:31:55 +0000 (0:00:00.798) 0:01:08.045 **********
2026-04-17 04:31:58.327873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 04:31:58.327886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 04:31:58.327897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 04:31:58.327917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:31:58.327936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:32:00.107792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 04:32:00.107897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 04:32:00.107914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 04:32:00.107926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 04:32:00.107961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 04:32:00.107975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 04:32:00.108007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 04:32:00.108026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 04:32:00.108038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 04:32:00.108050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 04:32:00.108061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:32:00.108081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:32:00.108120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:32:00.108134 | orchestrator |
2026-04-17 04:32:00.108147 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-17 04:32:00.108160 | orchestrator | Friday 17 April 2026 04:31:59 +0000 (0:00:04.630) 0:01:12.675 **********
2026-04-17 04:32:00.108171 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:32:00.108190 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:33:18.013275 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:33:18.013375 | orchestrator |
2026-04-17 04:33:18.013387 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-04-17 04:33:18.013396 | orchestrator | Friday 17 April 2026 04:32:00 +0000 (0:00:00.347) 0:01:13.022 **********
2026-04-17 04:33:18.013404 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-04-17 04:33:18.013412 | orchestrator |
2026-04-17 04:33:18.013433 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-04-17 04:33:18.013441 | orchestrator | Friday 17 April 2026 04:32:02 +0000 (0:00:01.992) 0:01:15.015 **********
2026-04-17 04:33:18.013449 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-17 04:33:18.013456 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-04-17 04:33:18.013464 | orchestrator |
2026-04-17 04:33:18.013471 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-04-17 04:33:18.013479 | orchestrator | Friday 17 April 2026 04:32:04 +0000 (0:00:02.074) 0:01:17.089 **********
2026-04-17 04:33:18.013486 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:33:18.013494 | orchestrator |
2026-04-17 04:33:18.013501 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-17 04:33:18.013508 | orchestrator | Friday 17 April 2026 04:32:18 +0000 (0:00:14.830) 0:01:31.919 **********
2026-04-17 04:33:18.013515 | orchestrator |
2026-04-17 04:33:18.013523 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-17 04:33:18.013530 | orchestrator | Friday 17 April 2026 04:32:19 +0000 (0:00:00.070) 0:01:31.990 **********
2026-04-17 04:33:18.013537 | orchestrator |
2026-04-17 04:33:18.013545 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-17 04:33:18.013552 | orchestrator | Friday 17 April 2026 04:32:19 +0000 (0:00:00.070) 0:01:32.060 **********
2026-04-17 04:33:18.013577 | orchestrator |
2026-04-17 04:33:18.013585 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-04-17 04:33:18.013592 | orchestrator | Friday 17 April 2026 04:32:19 +0000 (0:00:00.082) 0:01:32.143 **********
2026-04-17 04:33:18.013600 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:33:18.013608 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:33:18.013615 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:33:18.013622 | orchestrator |
2026-04-17 04:33:18.013630 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-04-17 04:33:18.013637 | orchestrator | Friday 17 April 2026 04:32:26 +0000 (0:00:07.573) 0:01:39.716 **********
2026-04-17 04:33:18.013644 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:33:18.013651 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:33:18.013659 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:33:18.013666 | orchestrator |
2026-04-17 04:33:18.013673 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-04-17 04:33:18.013681 | orchestrator | Friday 17 April 2026 04:32:37 +0000 (0:00:10.410) 0:01:50.127 **********
2026-04-17 04:33:18.013688 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:33:18.013695 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:33:18.013702 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:33:18.013709 | orchestrator |
2026-04-17 04:33:18.013717 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-04-17 04:33:18.013724 | orchestrator | Friday 17 April 2026 04:32:42 +0000 (0:00:05.542) 0:01:55.670 **********
2026-04-17 04:33:18.013731 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:33:18.013738 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:33:18.013746 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:33:18.013753 | orchestrator |
2026-04-17 04:33:18.013760 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-04-17 04:33:18.013767 | orchestrator | Friday 17 April 2026 04:32:51 +0000 (0:00:08.667) 0:02:04.337 **********
2026-04-17 04:33:18.013775 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:33:18.013782 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:33:18.013789 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:33:18.013796 | orchestrator |
2026-04-17 04:33:18.013805 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-04-17 04:33:18.013815 | orchestrator | Friday 17 April 2026 04:32:59 +0000 (0:00:08.464) 0:02:12.801 **********
2026-04-17 04:33:18.013823 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:33:18.013832 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:33:18.013840 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:33:18.013848 | orchestrator |
2026-04-17 04:33:18.013857 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-04-17 04:33:18.013866 | orchestrator | Friday 17 April 2026 04:33:10 +0000 (0:00:10.780) 0:02:23.582 **********
2026-04-17 04:33:18.013874 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:33:18.013882 | orchestrator |
2026-04-17 04:33:18.013891 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:33:18.013900 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 04:33:18.013911 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-17 04:33:18.013920 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-17 04:33:18.013928 | orchestrator |
2026-04-17 04:33:18.013936 | orchestrator |
2026-04-17 04:33:18.013945 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:33:18.013954 | orchestrator | Friday 17 April 2026 04:33:17 +0000 (0:00:06.921) 0:02:30.503 **********
2026-04-17 04:33:18.013962 | orchestrator | ===============================================================================
2026-04-17 04:33:18.013978 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.83s
2026-04-17 04:33:18.013987 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.79s
2026-04-17 04:33:18.014007 | orchestrator | designate : Restart designate-worker container ------------------------- 10.78s
2026-04-17 04:33:18.014068 | orchestrator | designate : Restart designate-api container ---------------------------- 10.41s
2026-04-17 04:33:18.014078 | orchestrator | designate : Restart designate-producer container ------------------------ 8.67s
2026-04-17 04:33:18.014086 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.46s
2026-04-17 04:33:18.014100 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.57s
2026-04-17 04:33:18.014109 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.92s
2026-04-17 04:33:18.014118 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.12s
2026-04-17 04:33:18.014126 | orchestrator | designate : Copying over config.json files for services ----------------- 5.98s
2026-04-17 04:33:18.014134 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.65s
2026-04-17 04:33:18.014143 | orchestrator | designate : Restart designate-central container ------------------------- 5.54s
2026-04-17 04:33:18.014152 | orchestrator | designate : Check designate containers ---------------------------------- 4.63s
2026-04-17 04:33:18.014161 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 3.91s 2026-04-17 04:33:18.014168 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.54s 2026-04-17 04:33:18.014176 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.50s 2026-04-17 04:33:18.014183 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.13s 2026-04-17 04:33:18.014206 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.10s 2026-04-17 04:33:18.014214 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 2.94s 2026-04-17 04:33:18.014221 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.85s 2026-04-17 04:33:20.561473 | orchestrator | 2026-04-17 04:33:20 | INFO  | Task 23b302b2-fd80-4765-ab0b-ffd068ff2ec0 (octavia) was prepared for execution. 2026-04-17 04:33:20.561560 | orchestrator | 2026-04-17 04:33:20 | INFO  | It takes a moment until task 23b302b2-fd80-4765-ab0b-ffd068ff2ec0 (octavia) has been started and output is visible here. 
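The designate recap above is dominated by the pools workflow ("Copying over pools.yaml", "Copying over rndc.conf", "Non-destructive DNS pools update"): kolla-ansible distributes a pool definition and then applies it via `designate-manage pool update` without touching existing zones. As a hedged reference only (hostnames, addresses, and key paths below are illustrative placeholders, not values from this job), a minimal BIND9-backed pools.yaml of the kind being applied looks roughly like:

```yaml
# Sketch of a Designate pools.yaml for a BIND9 backend (placeholder values).
- name: default
  description: Default BIND9 pool
  attributes: {}
  ns_records:
    # NS record published in zones served by this pool
    - hostname: ns1.example.org.
      priority: 1
  nameservers:
    # Where designate verifies that zone changes have propagated
    - host: 192.0.2.10
      port: 53
  targets:
    - type: bind9
      masters:
        # designate-mdns endpoints BIND9 pulls zone transfers from
        - host: 192.0.2.1
          port: 5354
      options:
        host: 192.0.2.10
        port: 53
        rndc_host: 192.0.2.10
        rndc_port: 953
        rndc_key_file: /etc/designate/rndc.key
```

Because the update step is non-destructive, re-running it with an unchanged file is idempotent, which is why only testbed-node-0 reports `changed` for that task.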
2026-04-17 04:35:19.942884 | orchestrator | 2026-04-17 04:35:19.943008 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 04:35:19.943025 | orchestrator | 2026-04-17 04:35:19.943039 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 04:35:19.943052 | orchestrator | Friday 17 April 2026 04:33:24 +0000 (0:00:00.254) 0:00:00.254 ********** 2026-04-17 04:35:19.943066 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:35:19.943080 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:35:19.943093 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:35:19.943101 | orchestrator | 2026-04-17 04:35:19.943109 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 04:35:19.943117 | orchestrator | Friday 17 April 2026 04:33:25 +0000 (0:00:00.312) 0:00:00.567 ********** 2026-04-17 04:35:19.943124 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-17 04:35:19.943133 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-17 04:35:19.943140 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-17 04:35:19.943148 | orchestrator | 2026-04-17 04:35:19.943155 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-17 04:35:19.943163 | orchestrator | 2026-04-17 04:35:19.943171 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 04:35:19.943178 | orchestrator | Friday 17 April 2026 04:33:25 +0000 (0:00:00.465) 0:00:01.032 ********** 2026-04-17 04:35:19.943206 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:35:19.943215 | orchestrator | 2026-04-17 04:35:19.943222 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-04-17 04:35:19.943229 | orchestrator | Friday 17 April 2026 04:33:26 +0000 (0:00:00.578) 0:00:01.611 ********** 2026-04-17 04:35:19.943237 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-04-17 04:35:19.943244 | orchestrator | 2026-04-17 04:35:19.943251 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-04-17 04:35:19.943258 | orchestrator | Friday 17 April 2026 04:33:29 +0000 (0:00:03.199) 0:00:04.810 ********** 2026-04-17 04:35:19.943266 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-04-17 04:35:19.943328 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-04-17 04:35:19.943338 | orchestrator | 2026-04-17 04:35:19.943345 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-04-17 04:35:19.943353 | orchestrator | Friday 17 April 2026 04:33:35 +0000 (0:00:06.039) 0:00:10.849 ********** 2026-04-17 04:35:19.943360 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 04:35:19.943368 | orchestrator | 2026-04-17 04:35:19.943375 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-04-17 04:35:19.943382 | orchestrator | Friday 17 April 2026 04:33:38 +0000 (0:00:03.030) 0:00:13.880 ********** 2026-04-17 04:35:19.943390 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 04:35:19.943399 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-17 04:35:19.943408 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-17 04:35:19.943416 | orchestrator | 2026-04-17 04:35:19.943424 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-04-17 04:35:19.943433 | orchestrator | Friday 17 April 2026 04:33:46 +0000 
(0:00:07.818) 0:00:21.699 ********** 2026-04-17 04:35:19.943441 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 04:35:19.943449 | orchestrator | 2026-04-17 04:35:19.943458 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-04-17 04:35:19.943466 | orchestrator | Friday 17 April 2026 04:33:49 +0000 (0:00:03.130) 0:00:24.830 ********** 2026-04-17 04:35:19.943487 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-17 04:35:19.943496 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-17 04:35:19.943503 | orchestrator | 2026-04-17 04:35:19.943510 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-04-17 04:35:19.943517 | orchestrator | Friday 17 April 2026 04:33:56 +0000 (0:00:06.861) 0:00:31.691 ********** 2026-04-17 04:35:19.943525 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-04-17 04:35:19.943532 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-04-17 04:35:19.943539 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-04-17 04:35:19.943546 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-04-17 04:35:19.943553 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-04-17 04:35:19.943560 | orchestrator | 2026-04-17 04:35:19.943568 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 04:35:19.943575 | orchestrator | Friday 17 April 2026 04:34:10 +0000 (0:00:14.842) 0:00:46.534 ********** 2026-04-17 04:35:19.943584 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:35:19.943597 | orchestrator | 2026-04-17 04:35:19.943609 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-04-17 04:35:19.943621 | orchestrator | Friday 17 April 2026 04:34:11 +0000 (0:00:00.912) 0:00:47.446 ********** 2026-04-17 04:35:19.943641 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:35:19.943652 | orchestrator | 2026-04-17 04:35:19.943664 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-04-17 04:35:19.943675 | orchestrator | Friday 17 April 2026 04:34:16 +0000 (0:00:04.608) 0:00:52.054 ********** 2026-04-17 04:35:19.943686 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:35:19.943697 | orchestrator | 2026-04-17 04:35:19.943708 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-17 04:35:19.943742 | orchestrator | Friday 17 April 2026 04:34:20 +0000 (0:00:03.754) 0:00:55.809 ********** 2026-04-17 04:35:19.943756 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:35:19.943767 | orchestrator | 2026-04-17 04:35:19.943780 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-04-17 04:35:19.943792 | orchestrator | Friday 17 April 2026 04:34:23 +0000 (0:00:03.082) 0:00:58.891 ********** 2026-04-17 04:35:19.943804 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-17 04:35:19.943816 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-17 04:35:19.943829 | orchestrator | 2026-04-17 04:35:19.943842 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-04-17 04:35:19.943854 | orchestrator | Friday 17 April 2026 04:34:31 +0000 (0:00:08.460) 0:01:07.353 ********** 2026-04-17 04:35:19.943866 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-04-17 04:35:19.943878 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-04-17 04:35:19.943891 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-04-17 04:35:19.943899 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-04-17 04:35:19.943907 | orchestrator | 2026-04-17 04:35:19.943914 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-04-17 04:35:19.943921 | orchestrator | Friday 17 April 2026 04:34:47 +0000 (0:00:16.053) 0:01:23.407 ********** 2026-04-17 04:35:19.943928 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:35:19.943940 | orchestrator | 2026-04-17 04:35:19.943947 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-04-17 04:35:19.943955 | orchestrator | Friday 17 April 2026 04:34:52 +0000 (0:00:04.370) 0:01:27.778 ********** 2026-04-17 04:35:19.943962 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:35:19.943969 | orchestrator | 2026-04-17 04:35:19.943976 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-04-17 04:35:19.943983 | orchestrator | Friday 17 April 2026 04:34:57 +0000 (0:00:04.786) 0:01:32.564 ********** 2026-04-17 04:35:19.943990 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:35:19.943997 | orchestrator | 2026-04-17 04:35:19.944004 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-04-17 04:35:19.944011 | orchestrator | Friday 17 April 2026 04:34:57 +0000 (0:00:00.250) 0:01:32.815 ********** 2026-04-17 04:35:19.944019 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:35:19.944026 | orchestrator | 2026-04-17 04:35:19.944033 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-04-17 04:35:19.944040 | orchestrator | Friday 17 April 2026 04:35:01 +0000 (0:00:04.183) 0:01:36.998 ********** 2026-04-17 04:35:19.944048 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:35:19.944055 | orchestrator | 2026-04-17 04:35:19.944062 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-04-17 04:35:19.944069 | orchestrator | Friday 17 April 2026 04:35:02 +0000 (0:00:01.194) 0:01:38.193 ********** 2026-04-17 04:35:19.944076 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:35:19.944083 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:35:19.944098 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:35:19.944105 | orchestrator | 2026-04-17 04:35:19.944113 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-04-17 04:35:19.944120 | orchestrator | Friday 17 April 2026 04:35:07 +0000 (0:00:05.101) 0:01:43.294 ********** 2026-04-17 04:35:19.944127 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:35:19.944140 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:35:19.944147 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:35:19.944154 | orchestrator | 2026-04-17 04:35:19.944161 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-04-17 04:35:19.944169 | orchestrator | Friday 17 April 2026 04:35:12 +0000 (0:00:04.649) 0:01:47.944 ********** 2026-04-17 04:35:19.944176 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:35:19.944183 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:35:19.944190 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:35:19.944197 | orchestrator | 2026-04-17 04:35:19.944204 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-17 
04:35:19.944215 | orchestrator | Friday 17 April 2026 04:35:13 +0000 (0:00:01.046) 0:01:48.991 ********** 2026-04-17 04:35:19.944226 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:35:19.944237 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:35:19.944252 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:35:19.944267 | orchestrator | 2026-04-17 04:35:19.944309 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-04-17 04:35:19.944319 | orchestrator | Friday 17 April 2026 04:35:15 +0000 (0:00:01.945) 0:01:50.936 ********** 2026-04-17 04:35:19.944330 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:35:19.944341 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:35:19.944352 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:35:19.944362 | orchestrator | 2026-04-17 04:35:19.944373 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-17 04:35:19.944384 | orchestrator | Friday 17 April 2026 04:35:16 +0000 (0:00:01.264) 0:01:52.200 ********** 2026-04-17 04:35:19.944394 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:35:19.944405 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:35:19.944416 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:35:19.944427 | orchestrator | 2026-04-17 04:35:19.944437 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-17 04:35:19.944449 | orchestrator | Friday 17 April 2026 04:35:17 +0000 (0:00:01.161) 0:01:53.362 ********** 2026-04-17 04:35:19.944460 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:35:19.944471 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:35:19.944482 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:35:19.944493 | orchestrator | 2026-04-17 04:35:19.944514 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-17 04:35:44.619047 | orchestrator 
| Friday 17 April 2026 04:35:19 +0000 (0:00:02.104) 0:01:55.466 ********** 2026-04-17 04:35:44.619171 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:35:44.619195 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:35:44.619209 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:35:44.619222 | orchestrator | 2026-04-17 04:35:44.619236 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-17 04:35:44.619250 | orchestrator | Friday 17 April 2026 04:35:21 +0000 (0:00:01.483) 0:01:56.949 ********** 2026-04-17 04:35:44.619262 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:35:44.619277 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:35:44.619290 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:35:44.619394 | orchestrator | 2026-04-17 04:35:44.619410 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-17 04:35:44.619423 | orchestrator | Friday 17 April 2026 04:35:22 +0000 (0:00:00.641) 0:01:57.591 ********** 2026-04-17 04:35:44.619437 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:35:44.619450 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:35:44.619463 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:35:44.619477 | orchestrator | 2026-04-17 04:35:44.619523 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 04:35:44.619538 | orchestrator | Friday 17 April 2026 04:35:25 +0000 (0:00:02.960) 0:02:00.551 ********** 2026-04-17 04:35:44.619554 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:35:44.619567 | orchestrator | 2026-04-17 04:35:44.619581 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-17 04:35:44.619590 | orchestrator | Friday 17 April 2026 04:35:25 +0000 (0:00:00.552) 0:02:01.103 ********** 2026-04-17 
04:35:44.619598 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:35:44.619606 | orchestrator | 2026-04-17 04:35:44.619614 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-17 04:35:44.619622 | orchestrator | Friday 17 April 2026 04:35:29 +0000 (0:00:03.710) 0:02:04.813 ********** 2026-04-17 04:35:44.619630 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:35:44.619637 | orchestrator | 2026-04-17 04:35:44.619645 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-17 04:35:44.619653 | orchestrator | Friday 17 April 2026 04:35:32 +0000 (0:00:03.057) 0:02:07.870 ********** 2026-04-17 04:35:44.619662 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-17 04:35:44.619670 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-17 04:35:44.619678 | orchestrator | 2026-04-17 04:35:44.619686 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-17 04:35:44.619694 | orchestrator | Friday 17 April 2026 04:35:38 +0000 (0:00:06.547) 0:02:14.417 ********** 2026-04-17 04:35:44.619702 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:35:44.619710 | orchestrator | 2026-04-17 04:35:44.619718 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-17 04:35:44.619725 | orchestrator | Friday 17 April 2026 04:35:42 +0000 (0:00:03.271) 0:02:17.689 ********** 2026-04-17 04:35:44.619733 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:35:44.619741 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:35:44.619749 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:35:44.619756 | orchestrator | 2026-04-17 04:35:44.619764 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-17 04:35:44.619772 | orchestrator | Friday 17 April 2026 04:35:42 +0000 (0:00:00.531) 0:02:18.221 ********** 
2026-04-17 04:35:44.619797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 04:35:44.619828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 04:35:44.619846 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 04:35:44.619855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 04:35:44.619863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 04:35:44.619871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 04:35:44.619885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:44.619894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:44.619916 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:46.153216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:46.153401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:46.153429 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:46.153472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:35:46.153496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:35:46.153516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:35:46.153563 | orchestrator | 2026-04-17 04:35:46.153586 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-17 04:35:46.153606 | orchestrator | Friday 17 April 2026 04:35:45 +0000 (0:00:02.363) 0:02:20.584 ********** 2026-04-17 04:35:46.153624 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:35:46.153643 | orchestrator | 2026-04-17 04:35:46.153661 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-17 04:35:46.153679 | orchestrator | Friday 17 April 2026 04:35:45 +0000 (0:00:00.155) 0:02:20.739 ********** 2026-04-17 04:35:46.153698 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:35:46.153739 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:35:46.153761 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:35:46.153780 | orchestrator | 2026-04-17 04:35:46.153799 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-17 04:35:46.153819 | orchestrator | Friday 17 April 2026 04:35:45 +0000 (0:00:00.343) 0:02:21.083 ********** 2026-04-17 04:35:46.153842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 04:35:46.153864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 04:35:46.153895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 04:35:46.153917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 04:35:46.153950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:35:46.153970 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:35:46.154004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 04:35:50.726545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 04:35:50.726631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 04:35:50.726658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 04:35:50.726668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:35:50.726697 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:35:50.726707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 04:35:50.726716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 04:35:50.726738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 04:35:50.726745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 04:35:50.726753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:35:50.726760 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:35:50.726767 | orchestrator | 2026-04-17 04:35:50.726781 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 04:35:50.726803 | orchestrator | Friday 17 April 2026 04:35:46 +0000 (0:00:00.690) 0:02:21.773 ********** 2026-04-17 04:35:50.726815 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:35:50.726826 | orchestrator | 2026-04-17 04:35:50.726836 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-17 04:35:50.726847 | orchestrator | Friday 17 April 2026 04:35:47 +0000 (0:00:00.793) 0:02:22.567 ********** 2026-04-17 04:35:50.726859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 04:35:50.726870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 04:35:50.726890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 04:35:52.183573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 04:35:52.183694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 04:35:52.183733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 04:35:52.183747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:52.183759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:52.183771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:52.183799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:52.183812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:52.183852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 04:35:52.183865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:35:52.183877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:35:52.183888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:35:52.183900 | orchestrator | 2026-04-17 04:35:52.183913 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-17 04:35:52.183925 | orchestrator | Friday 17 April 2026 04:35:51 +0000 (0:00:04.561) 0:02:27.129 ********** 2026-04-17 04:35:52.183947 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 04:35:52.288582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 04:35:52.288715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 04:35:52.288730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 04:35:52.288742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 04:35:52.288752 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:35:52.288764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 04:35:52.288775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 04:35:52.288802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 04:35:52.288833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 04:35:52.288843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:35:52.288851 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:35:52.288861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 04:35:52.288870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 04:35:52.288879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 04:35:52.288902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 04:35:53.096000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:35:53.096103 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:35:53.096122 | orchestrator |
2026-04-17 04:35:53.096136 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-04-17 04:35:53.096149 | orchestrator | Friday 17 April 2026 04:35:52 +0000 (0:00:00.695) 0:02:27.824 **********
2026-04-17 04:35:53.096163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 04:35:53.096178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 04:35:53.096192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 04:35:53.096205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 04:35:53.096254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:35:53.096268 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:35:53.096286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 04:35:53.096373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 04:35:53.096389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 04:35:53.096402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 04:35:53.096415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:35:53.096435 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:35:53.096456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 04:35:57.554507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 04:35:57.554621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 04:35:57.554640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 04:35:57.554655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:35:57.554694 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:35:57.554710 | orchestrator |
2026-04-17 04:35:57.554724 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-04-17 04:35:57.554738 | orchestrator | Friday 17 April 2026 04:35:53 +0000 (0:00:01.290) 0:02:29.114 **********
2026-04-17 04:35:57.554751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 04:35:57.554787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 04:35:57.554797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 04:35:57.554804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 04:35:57.554811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 04:35:57.554824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 04:35:57.554831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 04:35:57.554843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 04:36:13.539896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 04:36:13.540039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 04:36:13.540091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 04:36:13.540141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 04:36:13.540159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:36:13.540176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:36:13.540228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:36:13.540248 | orchestrator |
2026-04-17 04:36:13.540261 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-04-17 04:36:13.540271 | orchestrator | Friday 17 April 2026 04:35:58 +0000 (0:00:04.964) 0:02:34.079 **********
2026-04-17 04:36:13.540280 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-17 04:36:13.540291 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-17 04:36:13.540299 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-17 04:36:13.540308 | orchestrator |
2026-04-17 04:36:13.540338 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-04-17 04:36:13.540348 | orchestrator | Friday 17 April 2026 04:36:00 +0000 (0:00:01.597) 0:02:35.677 **********
2026-04-17 04:36:13.540358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 04:36:13.540379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 04:36:13.540390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 04:36:13.540414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 04:36:28.744068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 04:36:28.744248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 04:36:28.744282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 04:36:28.744374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 04:36:28.744392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 04:36:28.744405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 04:36:28.744459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 04:36:28.744473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 04:36:28.744485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:36:28.744508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:36:28.744520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 04:36:28.744532 | orchestrator |
2026-04-17 04:36:28.744545 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-04-17 04:36:28.744561 | orchestrator | Friday 17 April 2026 04:36:16 +0000 (0:00:16.712) 0:02:52.390 **********
2026-04-17 04:36:28.744574 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:36:28.744589 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:36:28.744601 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:36:28.744614 | orchestrator |
2026-04-17 04:36:28.744626 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-04-17 04:36:28.744639 | orchestrator | Friday 17 April 2026 04:36:18 +0000 (0:00:01.856) 0:02:54.246 **********
2026-04-17 04:36:28.744652 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-17 04:36:28.744665 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-17 04:36:28.744677 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-17 04:36:28.744690 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-17 04:36:28.744703 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-17 04:36:28.744715 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-17 04:36:28.744728 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-17 04:36:28.744741 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-17 04:36:28.744753 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-17 04:36:28.744766 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-17 04:36:28.744778 | orchestrator
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-17 04:36:28.744790 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-17 04:36:28.744803 | orchestrator | 2026-04-17 04:36:28.744816 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-17 04:36:28.744829 | orchestrator | Friday 17 April 2026 04:36:23 +0000 (0:00:04.893) 0:02:59.139 ********** 2026-04-17 04:36:28.744841 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-17 04:36:28.744865 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-17 04:36:28.744897 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-17 04:36:36.883119 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-17 04:36:36.883236 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-17 04:36:36.883276 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-17 04:36:36.883288 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-17 04:36:36.883299 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-17 04:36:36.883310 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-17 04:36:36.883321 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-17 04:36:36.883461 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-17 04:36:36.883494 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-17 04:36:36.883509 | orchestrator | 2026-04-17 04:36:36.883522 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-17 04:36:36.883535 | orchestrator | Friday 17 April 2026 04:36:28 +0000 (0:00:05.129) 0:03:04.269 ********** 2026-04-17 04:36:36.883546 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-04-17 04:36:36.883557 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-17 04:36:36.883568 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-17 04:36:36.883579 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-17 04:36:36.883590 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-17 04:36:36.883601 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-17 04:36:36.883611 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-17 04:36:36.883622 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-17 04:36:36.883632 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-17 04:36:36.883645 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-17 04:36:36.883657 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-17 04:36:36.883670 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-17 04:36:36.883682 | orchestrator | 2026-04-17 04:36:36.883695 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-17 04:36:36.883707 | orchestrator | Friday 17 April 2026 04:36:33 +0000 (0:00:05.064) 0:03:09.334 ********** 2026-04-17 04:36:36.883724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 04:36:36.883741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 04:36:36.883815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 04:36:36.883832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 04:36:36.883846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 04:36:36.883859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-04-17 04:36:36.883873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 04:36:36.883887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 04:36:36.883908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 04:36:36.883934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 04:37:59.630418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 04:37:59.630604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 04:37:59.630623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:37:59.630633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:37:59.630641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 04:37:59.630673 | orchestrator | 2026-04-17 
04:37:59.630684 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 04:37:59.630694 | orchestrator | Friday 17 April 2026 04:36:37 +0000 (0:00:03.903) 0:03:13.237 ********** 2026-04-17 04:37:59.630702 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:37:59.630711 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:37:59.630719 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:37:59.630727 | orchestrator | 2026-04-17 04:37:59.630735 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-17 04:37:59.630743 | orchestrator | Friday 17 April 2026 04:36:38 +0000 (0:00:00.589) 0:03:13.826 ********** 2026-04-17 04:37:59.630762 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:37:59.630770 | orchestrator | 2026-04-17 04:37:59.630778 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-17 04:37:59.630786 | orchestrator | Friday 17 April 2026 04:36:40 +0000 (0:00:01.940) 0:03:15.767 ********** 2026-04-17 04:37:59.630794 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:37:59.630802 | orchestrator | 2026-04-17 04:37:59.630809 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-17 04:37:59.630817 | orchestrator | Friday 17 April 2026 04:36:42 +0000 (0:00:02.019) 0:03:17.787 ********** 2026-04-17 04:37:59.630825 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:37:59.630833 | orchestrator | 2026-04-17 04:37:59.630840 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-17 04:37:59.630849 | orchestrator | Friday 17 April 2026 04:36:44 +0000 (0:00:02.107) 0:03:19.894 ********** 2026-04-17 04:37:59.630872 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:37:59.630881 | orchestrator | 2026-04-17 04:37:59.630889 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-04-17 04:37:59.630897 | orchestrator | Friday 17 April 2026 04:36:46 +0000 (0:00:02.126) 0:03:22.021 ********** 2026-04-17 04:37:59.630905 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:37:59.630913 | orchestrator | 2026-04-17 04:37:59.630925 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-17 04:37:59.630937 | orchestrator | Friday 17 April 2026 04:37:06 +0000 (0:00:20.504) 0:03:42.526 ********** 2026-04-17 04:37:59.630954 | orchestrator | 2026-04-17 04:37:59.630975 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-17 04:37:59.630988 | orchestrator | Friday 17 April 2026 04:37:07 +0000 (0:00:00.073) 0:03:42.599 ********** 2026-04-17 04:37:59.631000 | orchestrator | 2026-04-17 04:37:59.631013 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-17 04:37:59.631025 | orchestrator | Friday 17 April 2026 04:37:07 +0000 (0:00:00.092) 0:03:42.691 ********** 2026-04-17 04:37:59.631037 | orchestrator | 2026-04-17 04:37:59.631050 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-17 04:37:59.631063 | orchestrator | Friday 17 April 2026 04:37:07 +0000 (0:00:00.075) 0:03:42.767 ********** 2026-04-17 04:37:59.631075 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:37:59.631089 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:37:59.631104 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:37:59.631117 | orchestrator | 2026-04-17 04:37:59.631130 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-17 04:37:59.631142 | orchestrator | Friday 17 April 2026 04:37:23 +0000 (0:00:15.984) 0:03:58.751 ********** 2026-04-17 04:37:59.631155 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:37:59.631167 | orchestrator | changed: 
[testbed-node-2] 2026-04-17 04:37:59.631194 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:37:59.631207 | orchestrator | 2026-04-17 04:37:59.631220 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-17 04:37:59.631229 | orchestrator | Friday 17 April 2026 04:37:33 +0000 (0:00:10.612) 0:04:09.364 ********** 2026-04-17 04:37:59.631238 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:37:59.631245 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:37:59.631253 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:37:59.631261 | orchestrator | 2026-04-17 04:37:59.631269 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-17 04:37:59.631277 | orchestrator | Friday 17 April 2026 04:37:44 +0000 (0:00:10.234) 0:04:19.598 ********** 2026-04-17 04:37:59.631284 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:37:59.631292 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:37:59.631300 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:37:59.631308 | orchestrator | 2026-04-17 04:37:59.631315 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-17 04:37:59.631323 | orchestrator | Friday 17 April 2026 04:37:54 +0000 (0:00:10.129) 0:04:29.728 ********** 2026-04-17 04:37:59.631331 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:37:59.631339 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:37:59.631347 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:37:59.631354 | orchestrator | 2026-04-17 04:37:59.631362 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 04:37:59.631371 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-17 04:37:59.631380 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-17 04:37:59.631392 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 04:37:59.631410 | orchestrator | 2026-04-17 04:37:59.631427 | orchestrator | 2026-04-17 04:37:59.631467 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 04:37:59.631480 | orchestrator | Friday 17 April 2026 04:37:59 +0000 (0:00:05.415) 0:04:35.144 ********** 2026-04-17 04:37:59.631492 | orchestrator | =============================================================================== 2026-04-17 04:37:59.631503 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.50s 2026-04-17 04:37:59.631516 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.71s 2026-04-17 04:37:59.631528 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.05s 2026-04-17 04:37:59.631541 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.98s 2026-04-17 04:37:59.631554 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.84s 2026-04-17 04:37:59.631568 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.61s 2026-04-17 04:37:59.631579 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.23s 2026-04-17 04:37:59.631601 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.13s 2026-04-17 04:37:59.631611 | orchestrator | octavia : Create security groups for octavia ---------------------------- 8.46s 2026-04-17 04:37:59.631618 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.82s 2026-04-17 04:37:59.631626 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.86s 2026-04-17 04:37:59.631634 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 6.55s 2026-04-17 04:37:59.631642 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.04s 2026-04-17 04:37:59.631649 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.42s 2026-04-17 04:37:59.631667 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.13s 2026-04-17 04:37:59.881583 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.10s 2026-04-17 04:37:59.881688 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.06s 2026-04-17 04:37:59.881703 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.96s 2026-04-17 04:37:59.881715 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 4.89s 2026-04-17 04:37:59.881726 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 4.79s 2026-04-17 04:38:02.112675 | orchestrator | 2026-04-17 04:38:02 | INFO  | Task a9efafdb-8cb0-4292-b464-0397338b0ef1 (ceilometer) was prepared for execution. 2026-04-17 04:38:02.112762 | orchestrator | 2026-04-17 04:38:02 | INFO  | It takes a moment until task a9efafdb-8cb0-4292-b464-0397338b0ef1 (ceilometer) has been started and output is visible here. 
2026-04-17 04:38:24.407922 | orchestrator | 2026-04-17 04:38:24.408026 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 04:38:24.408040 | orchestrator | 2026-04-17 04:38:24.408051 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 04:38:24.408061 | orchestrator | Friday 17 April 2026 04:38:06 +0000 (0:00:00.279) 0:00:00.279 ********** 2026-04-17 04:38:24.408071 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:38:24.408082 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:38:24.408092 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:38:24.408102 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:38:24.408112 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:38:24.408121 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:38:24.408131 | orchestrator | 2026-04-17 04:38:24.408140 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 04:38:24.408150 | orchestrator | Friday 17 April 2026 04:38:07 +0000 (0:00:00.745) 0:00:01.024 ********** 2026-04-17 04:38:24.408160 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-04-17 04:38:24.408170 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-04-17 04:38:24.408179 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-04-17 04:38:24.408189 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-04-17 04:38:24.408198 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-04-17 04:38:24.408208 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-04-17 04:38:24.408217 | orchestrator | 2026-04-17 04:38:24.408227 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-04-17 04:38:24.408236 | orchestrator | 2026-04-17 04:38:24.408246 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-04-17 04:38:24.408255 | orchestrator | Friday 17 April 2026 04:38:08 +0000 (0:00:00.644) 0:00:01.669 ********** 2026-04-17 04:38:24.408266 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 04:38:24.408277 | orchestrator | 2026-04-17 04:38:24.408286 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-04-17 04:38:24.408296 | orchestrator | Friday 17 April 2026 04:38:09 +0000 (0:00:01.133) 0:00:02.802 ********** 2026-04-17 04:38:24.408305 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:38:24.408315 | orchestrator | 2026-04-17 04:38:24.408325 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-04-17 04:38:24.408334 | orchestrator | Friday 17 April 2026 04:38:09 +0000 (0:00:00.125) 0:00:02.928 ********** 2026-04-17 04:38:24.408343 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:38:24.408353 | orchestrator | 2026-04-17 04:38:24.408363 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-04-17 04:38:24.408372 | orchestrator | Friday 17 April 2026 04:38:09 +0000 (0:00:00.113) 0:00:03.041 ********** 2026-04-17 04:38:24.408382 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 04:38:24.408416 | orchestrator | 2026-04-17 04:38:24.408426 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-04-17 04:38:24.408435 | orchestrator | Friday 17 April 2026 04:38:12 +0000 (0:00:03.284) 0:00:06.325 ********** 2026-04-17 04:38:24.408445 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 04:38:24.408455 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-04-17 04:38:24.408492 | orchestrator | 
2026-04-17 04:38:24.408503 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-04-17 04:38:24.408515 | orchestrator | Friday 17 April 2026 04:38:15 +0000 (0:00:03.254) 0:00:09.580 ********** 2026-04-17 04:38:24.408526 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 04:38:24.408536 | orchestrator | 2026-04-17 04:38:24.408546 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-04-17 04:38:24.408555 | orchestrator | Friday 17 April 2026 04:38:18 +0000 (0:00:02.993) 0:00:12.574 ********** 2026-04-17 04:38:24.408578 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-04-17 04:38:24.408588 | orchestrator | 2026-04-17 04:38:24.408598 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-04-17 04:38:24.408608 | orchestrator | Friday 17 April 2026 04:38:22 +0000 (0:00:03.808) 0:00:16.382 ********** 2026-04-17 04:38:24.408617 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:38:24.408627 | orchestrator | 2026-04-17 04:38:24.408636 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-04-17 04:38:24.408646 | orchestrator | Friday 17 April 2026 04:38:22 +0000 (0:00:00.130) 0:00:16.513 ********** 2026-04-17 04:38:24.408658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 04:38:24.408690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 04:38:24.408702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 04:38:24.408712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:38:24.408733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:38:24.408744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:38:24.408755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:38:24.408772 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:38:29.514520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:38:29.514642 | orchestrator | 2026-04-17 04:38:29.514660 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-17 04:38:29.514674 | orchestrator | Friday 17 April 2026 04:38:24 +0000 (0:00:01.512) 0:00:18.025 ********** 2026-04-17 04:38:29.514711 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-04-17 04:38:29.514724 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 04:38:29.514734 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 04:38:29.514745 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 04:38:29.514756 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 04:38:29.514766 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 04:38:29.514777 | orchestrator | 2026-04-17 04:38:29.514788 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-04-17 04:38:29.514800 | orchestrator | Friday 17 April 2026 04:38:26 +0000 (0:00:01.696) 0:00:19.722 ********** 2026-04-17 04:38:29.514811 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:38:29.514823 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:38:29.514834 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:38:29.514844 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:38:29.514855 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:38:29.514866 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:38:29.514876 | orchestrator | 2026-04-17 04:38:29.514887 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-04-17 04:38:29.514898 | orchestrator | Friday 17 April 2026 04:38:26 +0000 (0:00:00.740) 0:00:20.463 ********** 2026-04-17 04:38:29.514909 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:38:29.514920 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:38:29.514931 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:38:29.514942 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:38:29.514953 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:38:29.514964 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:38:29.514975 | orchestrator | 2026-04-17 04:38:29.514987 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-04-17 04:38:29.515000 | orchestrator | Friday 17 April 2026 04:38:27 +0000 (0:00:00.882) 0:00:21.345 ********** 2026-04-17 04:38:29.515012 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:38:29.515025 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:38:29.515036 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:38:29.515049 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:38:29.515109 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:38:29.515132 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:38:29.515149 | orchestrator | 2026-04-17 04:38:29.515160 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-04-17 04:38:29.515171 | orchestrator | Friday 17 April 2026 04:38:28 +0000 (0:00:00.668) 0:00:22.014 ********** 2026-04-17 04:38:29.515189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:38:29.515203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:38:29.515215 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:38:29.515253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:38:29.515289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:38:29.515308 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:38:29.515327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:38:29.515347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:38:29.515374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:38:29.515394 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:38:29.515411 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:38:29.515423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:38:29.515442 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:38:29.515463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:38:34.414970 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:38:34.415078 | orchestrator | 2026-04-17 04:38:34.415093 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-04-17 04:38:34.415105 | orchestrator | Friday 17 April 2026 04:38:29 +0000 (0:00:01.114) 0:00:23.129 ********** 2026-04-17 04:38:34.415118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:38:34.415132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:38:34.415144 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:38:34.415155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:38:34.415181 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:38:34.415192 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:38:34.415202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:38:34.415231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-04-17 04:38:34.415242 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:38:34.415269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:38:34.415281 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:38:34.415291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:38:34.415301 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:38:34.415316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:38:34.415326 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:38:34.415336 | orchestrator | 2026-04-17 04:38:34.415347 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-04-17 04:38:34.415358 | orchestrator | Friday 17 April 2026 04:38:30 +0000 (0:00:00.832) 0:00:23.961 ********** 2026-04-17 04:38:34.415375 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 04:38:34.415385 | orchestrator | 2026-04-17 04:38:34.415395 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-04-17 04:38:34.415405 | orchestrator | Friday 17 April 2026 04:38:31 +0000 (0:00:00.727) 0:00:24.689 ********** 2026-04-17 04:38:34.415415 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:38:34.415426 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:38:34.415435 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:38:34.415445 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:38:34.415455 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:38:34.415464 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:38:34.415474 | orchestrator | 2026-04-17 04:38:34.415508 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-04-17 04:38:34.415520 | orchestrator | Friday 17 April 2026 04:38:31 +0000 (0:00:00.856) 
0:00:25.546 ********** 2026-04-17 04:38:34.415531 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:38:34.415542 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:38:34.415552 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:38:34.415563 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:38:34.415574 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:38:34.415585 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:38:34.415595 | orchestrator | 2026-04-17 04:38:34.415606 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-04-17 04:38:34.415617 | orchestrator | Friday 17 April 2026 04:38:32 +0000 (0:00:00.947) 0:00:26.493 ********** 2026-04-17 04:38:34.415628 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:38:34.415639 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:38:34.415650 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:38:34.415660 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:38:34.415671 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:38:34.415682 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:38:34.415693 | orchestrator | 2026-04-17 04:38:34.415704 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-04-17 04:38:34.415715 | orchestrator | Friday 17 April 2026 04:38:33 +0000 (0:00:00.921) 0:00:27.415 ********** 2026-04-17 04:38:34.415726 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:38:34.415737 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:38:34.415747 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:38:34.415759 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:38:34.415770 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:38:34.415780 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:38:34.415791 | orchestrator | 2026-04-17 04:38:39.573401 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-04-17 04:38:39.573477 | orchestrator | Friday 17 April 2026 04:38:34 +0000 (0:00:00.621) 0:00:28.037 ********** 2026-04-17 04:38:39.573484 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 04:38:39.573533 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 04:38:39.573538 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 04:38:39.573543 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 04:38:39.573547 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 04:38:39.573552 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 04:38:39.573556 | orchestrator | 2026-04-17 04:38:39.573561 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-04-17 04:38:39.573566 | orchestrator | Friday 17 April 2026 04:38:35 +0000 (0:00:01.548) 0:00:29.586 ********** 2026-04-17 04:38:39.573573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:38:39.573598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:38:39.573604 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:38:39.573620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:38:39.573625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:38:39.573630 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:38:39.573634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:38:39.573658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:38:39.573663 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:38:39.573668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:38:39.573684 | orchestrator | skipping: [testbed-node-3] 
2026-04-17 04:38:39.573689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:39.573693 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:38:39.573701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:39.573706 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:38:39.573710 | orchestrator |
2026-04-17 04:38:39.573715 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-04-17 04:38:39.573719 | orchestrator | Friday 17 April 2026 04:38:36 +0000 (0:00:00.851) 0:00:30.438 **********
2026-04-17 04:38:39.573723 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:38:39.573728 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:38:39.573732 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:38:39.573736 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:38:39.573741 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:38:39.573745 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:38:39.573749 | orchestrator |
2026-04-17 04:38:39.573753 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-04-17 04:38:39.573758 | orchestrator | Friday 17 April 2026 04:38:37 +0000 (0:00:00.864) 0:00:31.302 **********
2026-04-17 04:38:39.573762 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 04:38:39.573766 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-17 04:38:39.573771 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-17 04:38:39.573775 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-17 04:38:39.573779 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-17 04:38:39.573783 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-17 04:38:39.573788 | orchestrator |
2026-04-17 04:38:39.573792 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-04-17 04:38:39.573796 | orchestrator | Friday 17 April 2026 04:38:39 +0000 (0:00:01.381) 0:00:32.684 **********
2026-04-17 04:38:39.573805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:45.488889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:45.489005 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:38:45.489025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:45.489049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:45.489080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:45.489092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:45.489103 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:38:45.489115 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:38:45.489126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:45.489162 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:38:45.489192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:45.489205 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:38:45.489216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:45.489245 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:38:45.489256 | orchestrator |
2026-04-17 04:38:45.489268 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-04-17 04:38:45.489280 | orchestrator | Friday 17 April 2026 04:38:40 +0000 (0:00:01.174) 0:00:33.858 **********
2026-04-17 04:38:45.489291 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:38:45.489302 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:38:45.489312 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:38:45.489323 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:38:45.489333 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:38:45.489344 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:38:45.489354 | orchestrator |
2026-04-17 04:38:45.489365 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-04-17 04:38:45.489382 | orchestrator | Friday 17 April 2026 04:38:41 +0000 (0:00:00.174) 0:00:34.688 **********
2026-04-17 04:38:45.489395 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:38:45.489408 | orchestrator |
2026-04-17 04:38:45.489420 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-04-17 04:38:45.489432 | orchestrator | Friday 17 April 2026 04:38:41 +0000 (0:00:00.623) 0:00:34.862 **********
2026-04-17 04:38:45.489444 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:38:45.489456 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:38:45.489470 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:38:45.489482 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:38:45.489494 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:38:45.489542 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:38:45.489561 | orchestrator |
2026-04-17 04:38:45.489575 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-04-17 04:38:45.489595 | orchestrator | Friday 17 April 2026 04:38:41 +0000 (0:00:00.623) 0:00:35.486 **********
2026-04-17 04:38:45.489613 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 04:38:45.489644 | orchestrator |
2026-04-17 04:38:45.489662 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-04-17 04:38:45.489680 | orchestrator | Friday 17 April 2026 04:38:43 +0000 (0:00:01.421) 0:00:36.908 **********
2026-04-17 04:38:45.489698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:45.489732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:46.012668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:46.012771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:46.012803 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:46.012816 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:46.012852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:46.012865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:46.012897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:46.012910 | orchestrator |
2026-04-17 04:38:46.012923 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-04-17 04:38:46.012935 | orchestrator | Friday 17 April 2026 04:38:45 +0000 (0:00:02.200) 0:00:39.108 **********
2026-04-17 04:38:46.012948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:46.012965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:46.012978 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:38:46.012998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:46.013010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:46.013021 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:38:46.013032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:46.013052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:47.926912 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:38:47.926992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:47.927000 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:38:47.927014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:47.927034 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:38:47.927039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:47.927043 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:38:47.927047 | orchestrator |
2026-04-17 04:38:47.927053 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] ***
2026-04-17 04:38:47.927058 | orchestrator | Friday 17 April 2026 04:38:46 +0000 (0:00:00.879) 0:00:39.988 **********
2026-04-17 04:38:47.927063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:47.927069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:47.927085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:47.927090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:47.927097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:47.927105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:47.927109 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:38:47.927113 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:38:47.927118 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:38:47.927122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:47.927126 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:38:47.927131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:47.927135 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:38:47.927145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:55.208796 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:38:55.208877 | orchestrator |
2026-04-17 04:38:55.208884 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-04-17 04:38:55.208890 | orchestrator | Friday 17 April 2026 04:38:47 +0000 (0:00:01.554) 0:00:41.542 **********
2026-04-17 04:38:55.208923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:55.208938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:55.208943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:55.208948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:55.208954 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:55.208968 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-17 04:38:55.208977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:55.208985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:55.208989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 04:38:55.208993 | orchestrator |
2026-04-17 04:38:55.208997 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-04-17 04:38:55.209001 | orchestrator | Friday 17 April 2026 04:38:50 +0000 (0:00:02.530) 0:00:44.072 **********
2026-04-17 04:38:55.209004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:55.209009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-17 04:38:55.209015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:04.456838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:04.456973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:04.456991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:04.457005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:39:04.457018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:39:04.457040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:39:04.457092 | orchestrator | 2026-04-17 04:39:04.457113 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-04-17 04:39:04.457157 | orchestrator | Friday 17 April 2026 04:38:55 +0000 (0:00:04.756) 0:00:48.829 ********** 2026-04-17 04:39:04.457178 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 04:39:04.457199 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 04:39:04.457219 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 04:39:04.457238 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 04:39:04.457259 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 04:39:04.457278 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 04:39:04.457297 | orchestrator | 2026-04-17 04:39:04.457311 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-04-17 04:39:04.457329 | orchestrator | Friday 17 April 2026 04:38:56 +0000 (0:00:01.541) 0:00:50.370 ********** 2026-04-17 04:39:04.457346 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:39:04.457365 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:39:04.457384 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:39:04.457403 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:39:04.457421 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:39:04.457435 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:39:04.457445 | orchestrator | 2026-04-17 04:39:04.457465 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-04-17 
04:39:04.457477 | orchestrator | Friday 17 April 2026 04:38:57 +0000 (0:00:00.675) 0:00:51.046 ********** 2026-04-17 04:39:04.457488 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:39:04.457499 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:39:04.457509 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:39:04.457520 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:39:04.457574 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:39:04.457590 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:39:04.457600 | orchestrator | 2026-04-17 04:39:04.457611 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-04-17 04:39:04.457622 | orchestrator | Friday 17 April 2026 04:38:59 +0000 (0:00:01.686) 0:00:52.733 ********** 2026-04-17 04:39:04.457633 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:39:04.457644 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:39:04.457654 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:39:04.457665 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:39:04.457676 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:39:04.457686 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:39:04.457697 | orchestrator | 2026-04-17 04:39:04.457707 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-04-17 04:39:04.457718 | orchestrator | Friday 17 April 2026 04:39:00 +0000 (0:00:01.366) 0:00:54.099 ********** 2026-04-17 04:39:04.457729 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 04:39:04.457739 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 04:39:04.457750 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 04:39:04.457760 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 04:39:04.457771 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 04:39:04.457782 | orchestrator | ok: [testbed-node-4 -> localhost] 
2026-04-17 04:39:04.457792 | orchestrator | 2026-04-17 04:39:04.457803 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-04-17 04:39:04.457814 | orchestrator | Friday 17 April 2026 04:39:02 +0000 (0:00:01.561) 0:00:55.660 ********** 2026-04-17 04:39:04.457826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:04.457852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:04.457864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:04.457893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:05.321029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:05.321131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:05.321165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:39:05.321179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:39:05.321190 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:39:05.321201 | orchestrator | 2026-04-17 04:39:05.321212 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-04-17 04:39:05.321223 | orchestrator | Friday 17 April 2026 04:39:04 +0000 (0:00:02.413) 0:00:58.073 ********** 2026-04-17 04:39:05.321234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:39:05.321268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:39:05.321280 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:39:05.321292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:39:05.321311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:39:05.321322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:39:05.321332 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:39:05.321342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:39:05.321352 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:39:05.321362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:39:05.321372 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:39:05.321393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:39:08.734696 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:39:08.734811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:39:08.734857 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:39:08.734870 | orchestrator | 2026-04-17 04:39:08.734882 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-04-17 04:39:08.734894 | orchestrator | Friday 17 April 2026 04:39:05 +0000 (0:00:00.866) 0:00:58.940 ********** 2026-04-17 04:39:08.734905 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:39:08.734916 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 04:39:08.734926 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:39:08.734937 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:39:08.734948 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:39:08.734958 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:39:08.734969 | orchestrator | 2026-04-17 04:39:08.734980 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-04-17 04:39:08.734991 | orchestrator | Friday 17 April 2026 04:39:06 +0000 (0:00:00.859) 0:00:59.799 ********** 2026-04-17 04:39:08.735004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:39:08.735018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:39:08.735031 | orchestrator | skipping: [testbed-node-0] 2026-04-17 
04:39:08.735042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:39:08.735069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:39:08.735089 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:39:08.735121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 04:39:08.735137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 04:39:08.735150 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:39:08.735171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:39:08.735191 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:39:08.735209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:39:08.735228 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:39:08.735247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-17 04:39:08.735281 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:39:08.735301 | orchestrator | 2026-04-17 04:39:08.735319 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-04-17 04:39:08.735346 | orchestrator | Friday 17 April 2026 04:39:07 +0000 (0:00:00.884) 0:01:00.683 ********** 2026-04-17 04:39:08.735379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:46.141425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:46.141566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:46.141590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:46.141668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:46.141688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:39:46.141736 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:39:46.141775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 04:39:46.141794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-17 04:39:46.141811 | orchestrator | 
2026-04-17 04:39:46.141829 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-04-17 04:39:46.141847 | orchestrator | Friday 17 April 2026 04:39:08 +0000 (0:00:01.670) 0:01:02.354 **********
2026-04-17 04:39:46.141862 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:39:46.141879 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:39:46.141894 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:39:46.141910 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:39:46.141927 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:39:46.141943 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:39:46.141962 | orchestrator |
2026-04-17 04:39:46.141979 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] *********************
2026-04-17 04:39:46.141996 | orchestrator | Friday 17 April 2026 04:39:09 +0000 (0:00:00.634) 0:01:02.989 **********
2026-04-17 04:39:46.142012 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:39:46.142100 | orchestrator |
2026-04-17 04:39:46.142117 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-17 04:39:46.142135 | orchestrator | Friday 17 April 2026 04:39:13 +0000 (0:00:04.522) 0:01:07.511 **********
2026-04-17 04:39:46.142153 | orchestrator |
2026-04-17 04:39:46.142171 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-17 04:39:46.142190 | orchestrator | Friday 17 April 2026 04:39:13 +0000 (0:00:00.093) 0:01:07.605 **********
2026-04-17 04:39:46.142205 | orchestrator |
2026-04-17 04:39:46.142221 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-17 04:39:46.142236 | orchestrator | Friday 17 April 2026 04:39:14 +0000 (0:00:00.290) 0:01:07.695 **********
2026-04-17 04:39:46.142267 | orchestrator |
2026-04-17 04:39:46.142284 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-17 04:39:46.142299 | orchestrator | Friday 17 April 2026 04:39:14 +0000 (0:00:00.069) 0:01:07.985 **********
2026-04-17 04:39:46.142315 | orchestrator |
2026-04-17 04:39:46.142330 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-17 04:39:46.142346 | orchestrator | Friday 17 April 2026 04:39:14 +0000 (0:00:00.068) 0:01:08.055 **********
2026-04-17 04:39:46.142361 | orchestrator |
2026-04-17 04:39:46.142376 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-17 04:39:46.142391 | orchestrator | Friday 17 April 2026 04:39:14 +0000 (0:00:00.075) 0:01:08.124 **********
2026-04-17 04:39:46.142405 | orchestrator |
2026-04-17 04:39:46.142420 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] *******
2026-04-17 04:39:46.142436 | orchestrator | Friday 17 April 2026 04:39:14 +0000 (0:00:00.075) 0:01:08.199 **********
2026-04-17 04:39:46.142450 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:39:46.142465 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:39:46.142480 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:39:46.142495 | orchestrator |
2026-04-17 04:39:46.142510 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************
2026-04-17 04:39:46.142525 | orchestrator | Friday 17 April 2026 04:39:24 +0000 (0:00:10.310) 0:01:18.509 **********
2026-04-17 04:39:46.142540 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:39:46.142554 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:39:46.142569 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:39:46.142584 | orchestrator |
2026-04-17 04:39:46.142600 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************
2026-04-17 04:39:46.142657 | orchestrator | Friday 17 April 2026 04:39:34 +0000 (0:00:09.799) 0:01:28.309 **********
2026-04-17 04:39:46.142673 | orchestrator | changed: [testbed-node-4]
2026-04-17 04:39:46.142687 | orchestrator | changed: [testbed-node-3]
2026-04-17 04:39:46.142703 | orchestrator | changed: [testbed-node-5]
2026-04-17 04:39:46.142717 | orchestrator |
2026-04-17 04:39:46.142732 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:39:46.142750 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-17 04:39:46.142767 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-17 04:39:46.142802 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-17 04:39:46.668885 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-17 04:39:46.668977 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-17 04:39:46.668989 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-17 04:39:46.668999 | orchestrator |
2026-04-17 04:39:46.669009 | orchestrator |
2026-04-17 04:39:46.669019 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:39:46.669029 | orchestrator | Friday 17 April 2026 04:39:46 +0000 (0:00:11.439) 0:01:39.749 **********
2026-04-17 04:39:46.669038 | orchestrator | ===============================================================================
2026-04-17 04:39:46.669047 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.44s
2026-04-17 04:39:46.669056 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.31s
2026-04-17 04:39:46.669064 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.80s
2026-04-17 04:39:46.669095 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.76s
2026-04-17 04:39:46.669105 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.52s
2026-04-17 04:39:46.669113 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.81s
2026-04-17 04:39:46.669122 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.28s
2026-04-17 04:39:46.669131 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.25s
2026-04-17 04:39:46.669139 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 2.99s
2026-04-17 04:39:46.669148 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.53s
2026-04-17 04:39:46.669157 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.41s
2026-04-17 04:39:46.669165 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.20s
2026-04-17 04:39:46.669174 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.70s
2026-04-17 04:39:46.669183 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.69s
2026-04-17 04:39:46.669192 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.67s
2026-04-17 04:39:46.669200 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.56s
2026-04-17 04:39:46.669209 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.55s
2026-04-17 04:39:46.669218 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.55s
2026-04-17 04:39:46.669226 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.54s
2026-04-17 04:39:46.669235 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.51s
2026-04-17 04:39:49.115122 | orchestrator | 2026-04-17 04:39:49 | INFO  | Task c8025cfe-70a4-4c07-8c42-b88ea8a13c50 (aodh) was prepared for execution.
2026-04-17 04:39:49.115220 | orchestrator | 2026-04-17 04:39:49 | INFO  | It takes a moment until task c8025cfe-70a4-4c07-8c42-b88ea8a13c50 (aodh) has been started and output is visible here.
2026-04-17 04:40:19.952867 | orchestrator |
2026-04-17 04:40:19.952985 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 04:40:19.953003 | orchestrator |
2026-04-17 04:40:19.953015 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 04:40:19.953027 | orchestrator | Friday 17 April 2026 04:39:53 +0000 (0:00:00.261) 0:00:00.261 **********
2026-04-17 04:40:19.953038 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:40:19.953050 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:40:19.953061 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:40:19.953072 | orchestrator |
2026-04-17 04:40:19.953083 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 04:40:19.953094 | orchestrator | Friday 17 April 2026 04:39:53 +0000 (0:00:00.331) 0:00:00.592 **********
2026-04-17 04:40:19.953105 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True)
2026-04-17 04:40:19.953117 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True)
2026-04-17 04:40:19.953128 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True)
2026-04-17 04:40:19.953139 | orchestrator |
2026-04-17 04:40:19.953150 | orchestrator | PLAY [Apply role aodh] *********************************************************
2026-04-17 04:40:19.953161 | orchestrator |
2026-04-17 04:40:19.953172 | orchestrator | TASK [aodh : include_tasks]
**************************************************** 2026-04-17 04:40:19.953183 | orchestrator | Friday 17 April 2026 04:39:54 +0000 (0:00:00.521) 0:00:01.114 ********** 2026-04-17 04:40:19.953194 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:40:19.953205 | orchestrator | 2026-04-17 04:40:19.953216 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-04-17 04:40:19.953227 | orchestrator | Friday 17 April 2026 04:39:54 +0000 (0:00:00.600) 0:00:01.714 ********** 2026-04-17 04:40:19.953267 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-04-17 04:40:19.953285 | orchestrator | 2026-04-17 04:40:19.953304 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-04-17 04:40:19.953323 | orchestrator | Friday 17 April 2026 04:39:58 +0000 (0:00:03.216) 0:00:04.931 ********** 2026-04-17 04:40:19.953341 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-04-17 04:40:19.953359 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-04-17 04:40:19.953377 | orchestrator | 2026-04-17 04:40:19.953394 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-04-17 04:40:19.953413 | orchestrator | Friday 17 April 2026 04:40:04 +0000 (0:00:06.189) 0:00:11.120 ********** 2026-04-17 04:40:19.953431 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 04:40:19.953450 | orchestrator | 2026-04-17 04:40:19.953469 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-04-17 04:40:19.953486 | orchestrator | Friday 17 April 2026 04:40:07 +0000 (0:00:03.217) 0:00:14.338 ********** 2026-04-17 04:40:19.953499 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2026-04-17 04:40:19.953511 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-04-17 04:40:19.953523 | orchestrator | 2026-04-17 04:40:19.953536 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-04-17 04:40:19.953548 | orchestrator | Friday 17 April 2026 04:40:11 +0000 (0:00:03.684) 0:00:18.022 ********** 2026-04-17 04:40:19.953560 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 04:40:19.953573 | orchestrator | 2026-04-17 04:40:19.953585 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-04-17 04:40:19.953597 | orchestrator | Friday 17 April 2026 04:40:14 +0000 (0:00:03.100) 0:00:21.123 ********** 2026-04-17 04:40:19.953610 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-04-17 04:40:19.953622 | orchestrator | 2026-04-17 04:40:19.953634 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-17 04:40:19.953646 | orchestrator | Friday 17 April 2026 04:40:17 +0000 (0:00:03.617) 0:00:24.741 ********** 2026-04-17 04:40:19.953690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:19.953730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:19.953758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:19.953771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:19.953784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:19.953795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:19.953807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:19.953826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:21.276007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:21.276137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:21.276153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:21.276165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:21.276176 | orchestrator | 2026-04-17 04:40:21.276189 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-17 04:40:21.276202 | orchestrator | Friday 17 April 2026 04:40:19 +0000 (0:00:02.006) 0:00:26.747 ********** 2026-04-17 04:40:21.276213 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:40:21.276225 | orchestrator | 2026-04-17 
04:40:21.276239 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-17 04:40:21.276258 | orchestrator | Friday 17 April 2026 04:40:20 +0000 (0:00:00.147) 0:00:26.895 ********** 2026-04-17 04:40:21.276276 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:40:21.276295 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:40:21.276312 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:40:21.276329 | orchestrator | 2026-04-17 04:40:21.276347 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-04-17 04:40:21.276365 | orchestrator | Friday 17 April 2026 04:40:20 +0000 (0:00:00.537) 0:00:27.432 ********** 2026-04-17 04:40:21.276385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 04:40:21.276436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 04:40:21.276450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:40:21.276462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 04:40:21.276474 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:40:21.276485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 04:40:21.276497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 04:40:21.276508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:40:21.276535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 04:40:26.081192 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:40:26.081303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 04:40:26.081323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-04-17 04:40:26.081336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:40:26.081348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 04:40:26.081360 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:40:26.081371 | orchestrator | 2026-04-17 04:40:26.081383 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-17 04:40:26.081395 | orchestrator | Friday 17 April 2026 04:40:21 +0000 (0:00:00.641) 0:00:28.074 ********** 2026-04-17 04:40:26.081407 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:40:26.081442 | orchestrator | 2026-04-17 04:40:26.081461 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-17 04:40:26.081479 | orchestrator | Friday 
17 April 2026 04:40:22 +0000 (0:00:00.783) 0:00:28.857 ********** 2026-04-17 04:40:26.081500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:26.081545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:26.081568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:26.081589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:26.081610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-04-17 04:40:26.081643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:26.081664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:26.081725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:26.744241 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:26.744346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:26.744361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:26.744373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:26.744410 | orchestrator | 2026-04-17 04:40:26.744424 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-17 04:40:26.744437 | orchestrator | Friday 17 April 2026 04:40:26 +0000 (0:00:04.017) 0:00:32.874 ********** 2026-04-17 04:40:26.744450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 04:40:26.744462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 04:40:26.744493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:40:26.744538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 04:40:26.744550 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:40:26.744562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 04:40:26.744582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 04:40:26.744594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:40:26.744605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 04:40:26.744616 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:40:26.744637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 04:40:27.925354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-04-17 04:40:27.925457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:40:27.925497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 04:40:27.925511 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:40:27.925524 | orchestrator | 2026-04-17 04:40:27.925536 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-17 04:40:27.925549 | orchestrator | Friday 17 April 2026 04:40:26 +0000 (0:00:00.677) 0:00:33.552 ********** 2026-04-17 04:40:27.925560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 04:40:27.925574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 04:40:27.925586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:40:27.925629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 04:40:27.925642 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:40:27.925653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 04:40:27.925710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 04:40:27.925725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:40:27.925737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 04:40:27.925748 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:40:27.925769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 04:40:31.971021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 04:40:31.971157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 04:40:31.971174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 04:40:31.971187 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:40:31.971202 | orchestrator | 2026-04-17 04:40:31.971213 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-04-17 04:40:31.971226 | orchestrator | Friday 17 April 2026 04:40:27 +0000 (0:00:01.175) 0:00:34.727 ********** 2026-04-17 04:40:31.971238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:31.971252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:31.971283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:31.971303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:31.971315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:31.971326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:31.971337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:31.971349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:31.971360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:31.971379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:40.452106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:40.452333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:40.452369 | orchestrator | 2026-04-17 04:40:40.452391 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-17 04:40:40.452412 | orchestrator | Friday 17 April 2026 04:40:31 +0000 (0:00:04.040) 0:00:38.768 ********** 2026-04-17 04:40:40.452432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:40.452453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:40.452474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:40.452548 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:40.452562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:40.452574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:40.452585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:40.452596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:40.452608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:40.452629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:40.452652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:45.585044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:45.585152 | orchestrator | 2026-04-17 04:40:45.585169 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-17 04:40:45.585182 | orchestrator | Friday 17 April 2026 04:40:40 +0000 (0:00:08.483) 0:00:47.251 ********** 2026-04-17 04:40:45.585194 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:40:45.585206 | orchestrator | 
changed: [testbed-node-1] 2026-04-17 04:40:45.585216 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:40:45.585227 | orchestrator | 2026-04-17 04:40:45.585238 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-04-17 04:40:45.585249 | orchestrator | Friday 17 April 2026 04:40:42 +0000 (0:00:01.835) 0:00:49.087 ********** 2026-04-17 04:40:45.585262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:45.585276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:45.585315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 04:40:45.585346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:45.585359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:45.585387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 04:40:45.585399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:45.585410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:45.585429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:45.585440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:40:45.585459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:41:37.083103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 04:41:37.083204 | orchestrator | 2026-04-17 04:41:37.083218 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-17 04:41:37.083229 | orchestrator | Friday 17 April 2026 04:40:45 +0000 (0:00:03.296) 0:00:52.383 ********** 2026-04-17 04:41:37.083238 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:41:37.083248 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:41:37.083257 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:41:37.083266 | orchestrator | 2026-04-17 04:41:37.083275 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-04-17 04:41:37.083284 | orchestrator | Friday 17 April 2026 04:40:45 +0000 (0:00:00.318) 0:00:52.702 ********** 2026-04-17 04:41:37.083289 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:41:37.083295 | orchestrator | 2026-04-17 04:41:37.083300 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-04-17 04:41:37.083305 | orchestrator | Friday 17 April 2026 04:40:47 +0000 (0:00:02.089) 0:00:54.791 ********** 2026-04-17 04:41:37.083311 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:41:37.083316 | orchestrator | 2026-04-17 
04:41:37.083321 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-04-17 04:41:37.083344 | orchestrator | Friday 17 April 2026 04:40:50 +0000 (0:00:02.176) 0:00:56.967 ********** 2026-04-17 04:41:37.083349 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:41:37.083354 | orchestrator | 2026-04-17 04:41:37.083360 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-17 04:41:37.083365 | orchestrator | Friday 17 April 2026 04:41:02 +0000 (0:00:12.105) 0:01:09.073 ********** 2026-04-17 04:41:37.083370 | orchestrator | 2026-04-17 04:41:37.083375 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-17 04:41:37.083380 | orchestrator | Friday 17 April 2026 04:41:02 +0000 (0:00:00.071) 0:01:09.145 ********** 2026-04-17 04:41:37.083385 | orchestrator | 2026-04-17 04:41:37.083390 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-17 04:41:37.083395 | orchestrator | Friday 17 April 2026 04:41:02 +0000 (0:00:00.070) 0:01:09.215 ********** 2026-04-17 04:41:37.083400 | orchestrator | 2026-04-17 04:41:37.083405 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-04-17 04:41:37.083410 | orchestrator | Friday 17 April 2026 04:41:02 +0000 (0:00:00.276) 0:01:09.492 ********** 2026-04-17 04:41:37.083415 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:41:37.083420 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:41:37.083425 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:41:37.083431 | orchestrator | 2026-04-17 04:41:37.083436 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-04-17 04:41:37.083441 | orchestrator | Friday 17 April 2026 04:41:13 +0000 (0:00:10.393) 0:01:19.885 ********** 2026-04-17 04:41:37.083446 | orchestrator | changed: 
[testbed-node-1] 2026-04-17 04:41:37.083451 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:41:37.083456 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:41:37.083461 | orchestrator | 2026-04-17 04:41:37.083466 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-04-17 04:41:37.083471 | orchestrator | Friday 17 April 2026 04:41:21 +0000 (0:00:08.130) 0:01:28.015 ********** 2026-04-17 04:41:37.083477 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:41:37.083482 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:41:37.083487 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:41:37.083492 | orchestrator | 2026-04-17 04:41:37.083497 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-04-17 04:41:37.083502 | orchestrator | Friday 17 April 2026 04:41:31 +0000 (0:00:10.074) 0:01:38.090 ********** 2026-04-17 04:41:37.083507 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:41:37.083512 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:41:37.083517 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:41:37.083522 | orchestrator | 2026-04-17 04:41:37.083527 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 04:41:37.083533 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 04:41:37.083539 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 04:41:37.083544 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 04:41:37.083549 | orchestrator | 2026-04-17 04:41:37.083555 | orchestrator | 2026-04-17 04:41:37.083560 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 04:41:37.083565 | orchestrator | Friday 17 April 2026 
04:41:36 +0000 (0:00:05.370) 0:01:43.460 ********** 2026-04-17 04:41:37.083570 | orchestrator | =============================================================================== 2026-04-17 04:41:37.083575 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 12.11s 2026-04-17 04:41:37.083580 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.39s 2026-04-17 04:41:37.083597 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.07s 2026-04-17 04:41:37.083606 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.48s 2026-04-17 04:41:37.083612 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 8.13s 2026-04-17 04:41:37.083617 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.19s 2026-04-17 04:41:37.083621 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 5.37s 2026-04-17 04:41:37.083626 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.04s 2026-04-17 04:41:37.083631 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.02s 2026-04-17 04:41:37.083637 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.68s 2026-04-17 04:41:37.083642 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.62s 2026-04-17 04:41:37.083647 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.30s 2026-04-17 04:41:37.083652 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.22s 2026-04-17 04:41:37.083657 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.22s 2026-04-17 04:41:37.083662 | orchestrator | service-ks-register : aodh | Creating roles 
----------------------------- 3.10s 2026-04-17 04:41:37.083668 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.18s 2026-04-17 04:41:37.083674 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.09s 2026-04-17 04:41:37.083680 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.01s 2026-04-17 04:41:37.083686 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.84s 2026-04-17 04:41:37.083692 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.18s 2026-04-17 04:41:39.587760 | orchestrator | 2026-04-17 04:41:39 | INFO  | Task 57dfdb03-db6e-4146-be4c-6ae25aa6c909 (kolla-ceph-rgw) was prepared for execution. 2026-04-17 04:41:39.587945 | orchestrator | 2026-04-17 04:41:39 | INFO  | It takes a moment until task 57dfdb03-db6e-4146-be4c-6ae25aa6c909 (kolla-ceph-rgw) has been started and output is visible here. 
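The service definitions dumped in the task output above all carry a kolla-style healthcheck mapping: `healthcheck_curl <url>` for the API container and `healthcheck_port <service> <port>` for the worker containers, plus interval/retries/start_period/timeout values given as bare second counts. As a rough illustration of how such a mapping corresponds to Docker's native healthcheck options, here is a minimal sketch; the `to_docker_args` helper is hypothetical and not part of kolla-ansible, and the input dict is copied from the log above.

```python
def to_docker_args(hc):
    """Render a kolla-style healthcheck mapping (as seen in the log output)
    as the equivalent `docker run` health flags. Hypothetical helper for
    illustration only; kolla-ansible performs this translation internally."""
    return [
        "--health-cmd", hc["test"][1],                    # the CMD-SHELL payload
        "--health-interval", f"{hc['interval']}s",        # kolla stores bare seconds
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Healthcheck dict for aodh_api on testbed-node-0, copied from the task output.
healthcheck = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8042"],
    "timeout": "30",
}

args = to_docker_args(healthcheck)
```

The same shape applies to the `healthcheck_port` entries (e.g. `healthcheck_port aodh-listener 5672`), which probe a TCP port instead of issuing an HTTP request.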
2026-04-17 04:42:16.023526 | orchestrator |
2026-04-17 04:42:16.023684 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 04:42:16.023702 | orchestrator |
2026-04-17 04:42:16.023714 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 04:42:16.023725 | orchestrator | Friday 17 April 2026 04:41:43 +0000 (0:00:00.288) 0:00:00.288 **********
2026-04-17 04:42:16.023737 | orchestrator | ok: [testbed-manager]
2026-04-17 04:42:16.023750 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:42:16.023761 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:42:16.023772 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:42:16.023782 | orchestrator | ok: [testbed-node-3]
2026-04-17 04:42:16.023793 | orchestrator | ok: [testbed-node-4]
2026-04-17 04:42:16.023804 | orchestrator | ok: [testbed-node-5]
2026-04-17 04:42:16.023815 | orchestrator |
2026-04-17 04:42:16.023826 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 04:42:16.023885 | orchestrator | Friday 17 April 2026 04:41:44 +0000 (0:00:00.901) 0:00:01.189 **********
2026-04-17 04:42:16.023899 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-04-17 04:42:16.023911 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-04-17 04:42:16.023922 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-04-17 04:42:16.023933 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-04-17 04:42:16.023944 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-04-17 04:42:16.023955 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-04-17 04:42:16.023965 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-04-17 04:42:16.023976 | orchestrator |
2026-04-17 04:42:16.023987 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-17 04:42:16.024028 | orchestrator |
2026-04-17 04:42:16.024042 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-04-17 04:42:16.024054 | orchestrator | Friday 17 April 2026 04:41:45 +0000 (0:00:00.795) 0:00:01.984 **********
2026-04-17 04:42:16.024067 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 04:42:16.024083 | orchestrator |
2026-04-17 04:42:16.024096 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-04-17 04:42:16.024108 | orchestrator | Friday 17 April 2026 04:41:47 +0000 (0:00:01.696) 0:00:03.681 **********
2026-04-17 04:42:16.024121 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-04-17 04:42:16.024135 | orchestrator |
2026-04-17 04:42:16.024147 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-04-17 04:42:16.024161 | orchestrator | Friday 17 April 2026 04:41:51 +0000 (0:00:03.935) 0:00:07.616 **********
2026-04-17 04:42:16.024174 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-04-17 04:42:16.024189 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-04-17 04:42:16.024202 | orchestrator |
2026-04-17 04:42:16.024214 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-04-17 04:42:16.024227 | orchestrator | Friday 17 April 2026 04:41:57 +0000 (0:00:06.448) 0:00:14.065 **********
2026-04-17 04:42:16.024240 | orchestrator | ok: [testbed-manager] => (item=service)
2026-04-17 04:42:16.024252 | orchestrator |
2026-04-17 04:42:16.024264 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-04-17 04:42:16.024277 | orchestrator | Friday 17 April 2026 04:42:00 +0000 (0:00:03.052) 0:00:17.118 **********
2026-04-17 04:42:16.024290 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-17 04:42:16.024303 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-04-17 04:42:16.024315 | orchestrator |
2026-04-17 04:42:16.024327 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-04-17 04:42:16.024340 | orchestrator | Friday 17 April 2026 04:42:04 +0000 (0:00:03.689) 0:00:20.807 **********
2026-04-17 04:42:16.024353 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-04-17 04:42:16.024366 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-04-17 04:42:16.024377 | orchestrator |
2026-04-17 04:42:16.024387 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-04-17 04:42:16.024398 | orchestrator | Friday 17 April 2026 04:42:10 +0000 (0:00:06.017) 0:00:26.825 **********
2026-04-17 04:42:16.024409 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-04-17 04:42:16.024419 | orchestrator |
2026-04-17 04:42:16.024430 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:42:16.024441 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:42:16.024453 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:42:16.024464 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:42:16.024475 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:42:16.024485 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:42:16.024518 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:42:16.024539 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:42:16.024550 | orchestrator |
2026-04-17 04:42:16.024561 | orchestrator |
2026-04-17 04:42:16.024571 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:42:16.024582 | orchestrator | Friday 17 April 2026 04:42:15 +0000 (0:00:04.878) 0:00:31.703 **********
2026-04-17 04:42:16.024593 | orchestrator | ===============================================================================
2026-04-17 04:42:16.024604 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.45s
2026-04-17 04:42:16.024614 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.02s
2026-04-17 04:42:16.024631 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.88s
2026-04-17 04:42:16.024642 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.94s
2026-04-17 04:42:16.024653 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.69s
2026-04-17 04:42:16.024663 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.05s
2026-04-17 04:42:16.024674 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.70s
2026-04-17 04:42:16.024685 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s
2026-04-17 04:42:16.024695 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s
2026-04-17 04:42:18.470584 | orchestrator | 2026-04-17 04:42:18 | INFO  | Task 90c2386a-4744-49e0-a9cf-f922076c0bfa (gnocchi) was prepared for execution.
2026-04-17 04:42:18.470736 | orchestrator | 2026-04-17 04:42:18 | INFO  | It takes a moment until task 90c2386a-4744-49e0-a9cf-f922076c0bfa (gnocchi) has been started and output is visible here.
2026-04-17 04:42:23.846363 | orchestrator |
2026-04-17 04:42:23.846536 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 04:42:23.846553 | orchestrator |
2026-04-17 04:42:23.846565 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 04:42:23.846577 | orchestrator | Friday 17 April 2026 04:42:22 +0000 (0:00:00.282) 0:00:00.282 **********
2026-04-17 04:42:23.846589 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:42:23.846602 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:42:23.846612 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:42:23.846623 | orchestrator |
2026-04-17 04:42:23.846634 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 04:42:23.846645 | orchestrator | Friday 17 April 2026 04:42:23 +0000 (0:00:00.325) 0:00:00.608 **********
2026-04-17 04:42:23.846656 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-04-17 04:42:23.846667 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-04-17 04:42:23.846679 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-04-17 04:42:23.846690 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-04-17 04:42:23.846701 | orchestrator |
2026-04-17 04:42:23.846711 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-04-17 04:42:23.846722 | orchestrator | skipping: no hosts matched
2026-04-17 04:42:23.846735 | orchestrator |
2026-04-17 04:42:23.846746 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:42:23.846757 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:42:23.846770 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:42:23.846780 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 04:42:23.846791 | orchestrator |
2026-04-17 04:42:23.846802 | orchestrator |
2026-04-17 04:42:23.846846 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:42:23.846860 | orchestrator | Friday 17 April 2026 04:42:23 +0000 (0:00:00.407) 0:00:01.015 **********
2026-04-17 04:42:23.846892 | orchestrator | ===============================================================================
2026-04-17 04:42:23.846905 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2026-04-17 04:42:23.846918 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-04-17 04:42:26.288607 | orchestrator | 2026-04-17 04:42:26 | INFO  | Task a80a65cd-e42b-4cf4-a802-913b797ac9e7 (manila) was prepared for execution.
2026-04-17 04:42:26.288738 | orchestrator | 2026-04-17 04:42:26 | INFO  | It takes a moment until task a80a65cd-e42b-4cf4-a802-913b797ac9e7 (manila) has been started and output is visible here.
2026-04-17 04:43:06.116209 | orchestrator |
2026-04-17 04:43:06.116338 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 04:43:06.116354 | orchestrator |
2026-04-17 04:43:06.116367 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 04:43:06.116379 | orchestrator | Friday 17 April 2026 04:42:30 +0000 (0:00:00.269) 0:00:00.269 **********
2026-04-17 04:43:06.116391 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:43:06.116403 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:43:06.116414 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:43:06.116425 | orchestrator |
2026-04-17 04:43:06.116436 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 04:43:06.116447 | orchestrator | Friday 17 April 2026 04:42:30 +0000 (0:00:00.328) 0:00:00.597 **********
2026-04-17 04:43:06.116458 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-04-17 04:43:06.116470 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-04-17 04:43:06.116481 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-04-17 04:43:06.116492 | orchestrator |
2026-04-17 04:43:06.116503 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-04-17 04:43:06.116514 | orchestrator |
2026-04-17 04:43:06.116525 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-17 04:43:06.116536 | orchestrator | Friday 17 April 2026 04:42:31 +0000 (0:00:00.466) 0:00:01.064 **********
2026-04-17 04:43:06.116547 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:43:06.116559 | orchestrator |
2026-04-17 04:43:06.116587 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-17 04:43:06.116599 | orchestrator | Friday 17 April 2026 04:42:32 +0000 (0:00:00.631) 0:00:01.696 **********
2026-04-17 04:43:06.116610 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:43:06.116623 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:43:06.116633 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:43:06.116644 | orchestrator |
2026-04-17 04:43:06.116655 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-04-17 04:43:06.116666 | orchestrator | Friday 17 April 2026 04:42:32 +0000 (0:00:00.513) 0:00:02.210 **********
2026-04-17 04:43:06.116677 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-04-17 04:43:06.116688 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-04-17 04:43:06.116699 | orchestrator |
2026-04-17 04:43:06.116710 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-04-17 04:43:06.116721 | orchestrator | Friday 17 April 2026 04:42:38 +0000 (0:00:06.151) 0:00:08.361 **********
2026-04-17 04:43:06.116735 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-04-17 04:43:06.116748 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-04-17 04:43:06.116761 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-04-17 04:43:06.116796 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-04-17 04:43:06.116809 | orchestrator |
2026-04-17 04:43:06.116823 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-04-17 04:43:06.116836 | orchestrator | Friday 17 April 2026 04:42:50 +0000 (0:00:11.770) 0:00:20.131 **********
2026-04-17 04:43:06.116848 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-17 04:43:06.116861 | orchestrator |
2026-04-17 04:43:06.116874 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-04-17 04:43:06.116887 | orchestrator | Friday 17 April 2026 04:42:53 +0000 (0:00:03.115) 0:00:23.247 **********
2026-04-17 04:43:06.116899 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-17 04:43:06.116911 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-04-17 04:43:06.116924 | orchestrator |
2026-04-17 04:43:06.116961 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-04-17 04:43:06.116974 | orchestrator | Friday 17 April 2026 04:42:57 +0000 (0:00:03.676) 0:00:26.923 **********
2026-04-17 04:43:06.116986 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-17 04:43:06.116999 | orchestrator |
2026-04-17 04:43:06.117012 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-04-17 04:43:06.117024 | orchestrator | Friday 17 April 2026 04:43:00 +0000 (0:00:03.054) 0:00:29.977 **********
2026-04-17 04:43:06.117037 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-04-17 04:43:06.117049 | orchestrator |
2026-04-17 04:43:06.117063 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-04-17 04:43:06.117076 | orchestrator | Friday 17 April 2026 04:43:03 +0000 (0:00:03.522) 0:00:33.499 **********
2026-04-17 04:43:06.117110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 04:43:06.117126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 04:43:06.117144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 04:43:06.117166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:43:06.117180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:43:06.117191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:43:06.117212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 04:43:16.796741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 04:43:16.796907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 04:43:16.797017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-17 04:43:16.797042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-17 04:43:16.797062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-17 04:43:16.797083 | orchestrator |
2026-04-17 04:43:16.797105 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-17 04:43:16.797127 | orchestrator | Friday 17 April 2026 04:43:06 +0000 (0:00:02.388) 0:00:35.888 **********
2026-04-17 04:43:16.797146 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:43:16.797166 | orchestrator |
2026-04-17 04:43:16.797185 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-04-17 04:43:16.797205 | orchestrator | Friday 17 April 2026 04:43:06 +0000 (0:00:00.612) 0:00:36.500 **********
2026-04-17 04:43:16.797224 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:43:16.797242 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:43:16.797253 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:43:16.797264 | orchestrator |
2026-04-17 04:43:16.797277 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-04-17 04:43:16.797290 | orchestrator | Friday 17 April 2026 04:43:07 +0000 (0:00:01.111) 0:00:37.611 **********
2026-04-17 04:43:16.797303 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-17 04:43:16.797337 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-17 04:43:16.797351 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-17 04:43:16.797363 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-17 04:43:16.797387 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-17 04:43:16.797408 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-17 04:43:16.797421 | orchestrator |
2026-04-17 04:43:16.797434 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-04-17 04:43:16.797447 | orchestrator | Friday 17 April 2026 04:43:09 +0000 (0:00:01.778) 0:00:39.390 **********
2026-04-17 04:43:16.797459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-17 04:43:16.797472 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-17 04:43:16.797485 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-17 04:43:16.797498 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-17 04:43:16.797510 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-17 04:43:16.797523 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-17 04:43:16.797535 | orchestrator |
2026-04-17 04:43:16.797547 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-04-17 04:43:16.797560 | orchestrator | Friday 17 April 2026 04:43:10 +0000 (0:00:01.241) 0:00:40.631 **********
2026-04-17 04:43:16.797573 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-04-17 04:43:16.797587 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-04-17 04:43:16.797599 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-04-17 04:43:16.797611 | orchestrator |
2026-04-17 04:43:16.797623 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-04-17 04:43:16.797634 | orchestrator | Friday 17 April 2026 04:43:11 +0000 (0:00:00.681) 0:00:41.313 **********
2026-04-17 04:43:16.797645 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:43:16.797655 | orchestrator |
2026-04-17 04:43:16.797666 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-04-17 04:43:16.797677 | orchestrator | Friday 17 April 2026 04:43:11 +0000 (0:00:00.151) 0:00:41.464 **********
2026-04-17 04:43:16.797688 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:43:16.797699 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:43:16.797709 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:43:16.797720 | orchestrator |
2026-04-17 04:43:16.797730 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-17 04:43:16.797741 | orchestrator | Friday 17 April 2026 04:43:12 +0000 (0:00:00.550) 0:00:42.015 **********
2026-04-17 04:43:16.797752 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:43:16.797763 | orchestrator |
2026-04-17 04:43:16.797773 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-04-17 04:43:16.797784 | orchestrator | Friday 17 April 2026 04:43:12 +0000 (0:00:00.623) 0:00:42.638 **********
2026-04-17 04:43:16.797803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 04:43:17.719127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 04:43:17.719244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-17 04:43:17.719262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:43:17.719276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:43:17.719287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 04:43:17.719337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 04:43:17.719359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 04:43:17.719371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 04:43:17.719382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-17 04:43:17.719393 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:17.719404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:17.719423 | orchestrator | 2026-04-17 04:43:17.719436 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-17 04:43:17.719449 | orchestrator | Friday 17 April 2026 04:43:16 +0000 (0:00:03.942) 0:00:46.580 ********** 2026-04-17 04:43:17.719469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 04:43:18.437352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:43:18.437434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:18.437445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 04:43:18.437452 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:43:18.437462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 04:43:18.437486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:43:18.437493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:18.437515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 04:43:18.437522 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:43:18.437529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 04:43:18.437535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:43:18.437541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:18.437552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 04:43:18.437558 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:43:18.437564 | orchestrator | 2026-04-17 04:43:18.437571 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-17 04:43:18.437579 | orchestrator | Friday 17 April 2026 04:43:17 +0000 (0:00:00.932) 0:00:47.512 ********** 2026-04-17 04:43:18.437590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 04:43:23.007895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:43:23.008039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:23.008055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 04:43:23.008094 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:43:23.008107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 04:43:23.008118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:43:23.008127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:23.008159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 04:43:23.008169 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:43:23.008179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 04:43:23.008195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:43:23.008205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:23.008214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 04:43:23.008223 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:43:23.008232 | orchestrator | 2026-04-17 04:43:23.008242 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-17 04:43:23.008252 | orchestrator | Friday 17 April 
2026 04:43:18 +0000 (0:00:00.934) 0:00:48.447 ********** 2026-04-17 04:43:23.008272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 04:43:29.771891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 04:43:29.772076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 04:43:29.772132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:29.772152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-17 04:43:29.772169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:29.772223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:29.772245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:29.772271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:29.772287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:29.772303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:29.772318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:29.772334 | orchestrator | 2026-04-17 04:43:29.772353 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-17 04:43:29.772369 | orchestrator | Friday 17 April 2026 04:43:23 +0000 (0:00:04.584) 0:00:53.031 ********** 2026-04-17 04:43:29.772402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 04:43:34.093728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 04:43:34.093910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 04:43:34.093942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:34.093963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:34.094095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:34.094142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:34.094191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:34.094203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:34.094214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:34.094224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:34.094234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 04:43:34.094247 | orchestrator | 2026-04-17 04:43:34.094260 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-04-17 04:43:34.094273 | orchestrator | Friday 17 April 2026 04:43:29 +0000 (0:00:06.523) 0:00:59.555 ********** 
2026-04-17 04:43:34.094285 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-04-17 04:43:34.094296 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-04-17 04:43:34.094307 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-04-17 04:43:34.094319 | orchestrator | 2026-04-17 04:43:34.094334 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-04-17 04:43:34.094346 | orchestrator | Friday 17 April 2026 04:43:33 +0000 (0:00:03.666) 0:01:03.222 ********** 2026-04-17 04:43:34.094373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 04:43:37.358290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:43:37.358410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:37.358428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 04:43:37.358442 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:43:37.358456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 04:43:37.358485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 04:43:37.358517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:37.358549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 04:43:37.358562 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:43:37.358573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 04:43:37.358585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-04-17 04:43:37.358597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 04:43:37.358614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 04:43:37.358633 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:43:37.358645 | orchestrator | 2026-04-17 04:43:37.358657 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-04-17 04:43:37.358669 | orchestrator | Friday 17 April 2026 04:43:34 +0000 (0:00:00.651) 0:01:03.873 ********** 2026-04-17 04:43:37.358689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 04:44:15.481705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 04:44:15.481828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 04:44:15.481854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:44:15.481926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:44:15.481943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 04:44:15.481973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 04:44:15.481988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 04:44:15.481999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 04:44:15.482011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 04:44:15.482151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 04:44:15.482182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 04:44:15.482195 | orchestrator | 2026-04-17 04:44:15.482208 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-04-17 04:44:15.482221 | orchestrator | Friday 17 April 2026 04:43:37 +0000 (0:00:03.272) 0:01:07.146 ********** 2026-04-17 04:44:15.482232 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:44:15.482247 | orchestrator | 2026-04-17 04:44:15.482259 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-04-17 04:44:15.482271 | orchestrator | Friday 17 April 2026 04:43:39 +0000 (0:00:02.005) 0:01:09.151 ********** 2026-04-17 04:44:15.482284 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:44:15.482296 | orchestrator | 2026-04-17 04:44:15.482308 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-04-17 04:44:15.482319 | orchestrator | Friday 17 April 2026 04:43:41 +0000 (0:00:02.135) 0:01:11.286 ********** 2026-04-17 04:44:15.482331 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:44:15.482343 | orchestrator | 2026-04-17 04:44:15.482355 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-17 04:44:15.482368 | orchestrator | Friday 17 April 2026 04:44:15 +0000 (0:00:33.605) 0:01:44.892 ********** 2026-04-17 04:44:15.482380 | orchestrator | 2026-04-17 04:44:15.482403 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-17 04:45:05.908467 | orchestrator | Friday 17 April 2026 04:44:15 
+0000 (0:00:00.091) 0:01:44.983 ********** 2026-04-17 04:45:05.908589 | orchestrator | 2026-04-17 04:45:05.908607 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-17 04:45:05.908619 | orchestrator | Friday 17 April 2026 04:44:15 +0000 (0:00:00.073) 0:01:45.057 ********** 2026-04-17 04:45:05.908630 | orchestrator | 2026-04-17 04:45:05.908641 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-04-17 04:45:05.908652 | orchestrator | Friday 17 April 2026 04:44:15 +0000 (0:00:00.095) 0:01:45.152 ********** 2026-04-17 04:45:05.908664 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:45:05.908676 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:45:05.908687 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:45:05.908698 | orchestrator | 2026-04-17 04:45:05.908709 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-04-17 04:45:05.908719 | orchestrator | Friday 17 April 2026 04:44:30 +0000 (0:00:15.115) 0:02:00.268 ********** 2026-04-17 04:45:05.908730 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:45:05.908741 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:45:05.908752 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:45:05.908762 | orchestrator | 2026-04-17 04:45:05.908773 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-04-17 04:45:05.908784 | orchestrator | Friday 17 April 2026 04:44:36 +0000 (0:00:05.890) 0:02:06.159 ********** 2026-04-17 04:45:05.908824 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:45:05.908835 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:45:05.908846 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:45:05.908857 | orchestrator | 2026-04-17 04:45:05.908868 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-04-17 
04:45:05.908878 | orchestrator | Friday 17 April 2026 04:44:46 +0000 (0:00:10.202) 0:02:16.361 ********** 2026-04-17 04:45:05.908889 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:45:05.908900 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:45:05.908910 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:45:05.908921 | orchestrator | 2026-04-17 04:45:05.908932 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 04:45:05.908944 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 04:45:05.908955 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 04:45:05.908966 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 04:45:05.908977 | orchestrator | 2026-04-17 04:45:05.908987 | orchestrator | 2026-04-17 04:45:05.908998 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 04:45:05.909012 | orchestrator | Friday 17 April 2026 04:45:05 +0000 (0:00:18.660) 0:02:35.022 ********** 2026-04-17 04:45:05.909025 | orchestrator | =============================================================================== 2026-04-17 04:45:05.909038 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 33.61s 2026-04-17 04:45:05.909051 | orchestrator | manila : Restart manila-share container -------------------------------- 18.66s 2026-04-17 04:45:05.909064 | orchestrator | manila : Restart manila-api container ---------------------------------- 15.12s 2026-04-17 04:45:05.909077 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 11.77s 2026-04-17 04:45:05.909090 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.20s 2026-04-17 04:45:05.909100 | 
orchestrator | manila : Copying over manila.conf --------------------------------------- 6.52s 2026-04-17 04:45:05.909125 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.15s 2026-04-17 04:45:05.909161 | orchestrator | manila : Restart manila-data container ---------------------------------- 5.89s 2026-04-17 04:45:05.909173 | orchestrator | manila : Copying over config.json files for services -------------------- 4.58s 2026-04-17 04:45:05.909183 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 3.94s 2026-04-17 04:45:05.909194 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.68s 2026-04-17 04:45:05.909205 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.67s 2026-04-17 04:45:05.909215 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.52s 2026-04-17 04:45:05.909226 | orchestrator | manila : Check manila containers ---------------------------------------- 3.27s 2026-04-17 04:45:05.909237 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.12s 2026-04-17 04:45:05.909248 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.05s 2026-04-17 04:45:05.909258 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.39s 2026-04-17 04:45:05.909269 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.14s 2026-04-17 04:45:05.909280 | orchestrator | manila : Creating Manila database --------------------------------------- 2.01s 2026-04-17 04:45:05.909291 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.78s 2026-04-17 04:45:06.309319 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-04-17 04:45:18.397665 | orchestrator | 2026-04-17 04:45:18 
| INFO  | Task 13feee39-b069-476d-a6d1-9fe2cdd6b3a4 (netdata) was prepared for execution. 2026-04-17 04:45:18.397823 | orchestrator | 2026-04-17 04:45:18 | INFO  | It takes a moment until task 13feee39-b069-476d-a6d1-9fe2cdd6b3a4 (netdata) has been started and output is visible here. 2026-04-17 04:46:40.388074 | orchestrator | 2026-04-17 04:46:40.388194 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 04:46:40.388211 | orchestrator | 2026-04-17 04:46:40.388223 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 04:46:40.388234 | orchestrator | Friday 17 April 2026 04:45:22 +0000 (0:00:00.235) 0:00:00.235 ********** 2026-04-17 04:46:40.388246 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-17 04:46:40.388257 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-17 04:46:40.388268 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-17 04:46:40.388279 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-17 04:46:40.388289 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-17 04:46:40.388300 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-17 04:46:40.388311 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-17 04:46:40.388321 | orchestrator | 2026-04-17 04:46:40.388378 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-17 04:46:40.388390 | orchestrator | 2026-04-17 04:46:40.388401 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-17 04:46:40.388412 | orchestrator | Friday 17 April 2026 04:45:23 +0000 (0:00:00.950) 0:00:01.185 ********** 2026-04-17 04:46:40.388424 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 04:46:40.388438 | orchestrator | 2026-04-17 04:46:40.388449 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-17 04:46:40.388460 | orchestrator | Friday 17 April 2026 04:45:25 +0000 (0:00:01.409) 0:00:02.595 ********** 2026-04-17 04:46:40.388471 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:46:40.388483 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:46:40.388494 | orchestrator | ok: [testbed-manager] 2026-04-17 04:46:40.388505 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:46:40.388516 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:46:40.388527 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:46:40.388538 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:46:40.388549 | orchestrator | 2026-04-17 04:46:40.388560 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-17 04:46:40.388571 | orchestrator | Friday 17 April 2026 04:45:27 +0000 (0:00:01.954) 0:00:04.549 ********** 2026-04-17 04:46:40.388581 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:46:40.388592 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:46:40.388603 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:46:40.388613 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:46:40.388624 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:46:40.388635 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:46:40.388645 | orchestrator | ok: [testbed-manager] 2026-04-17 04:46:40.388656 | orchestrator | 2026-04-17 04:46:40.388667 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-04-17 04:46:40.388678 | orchestrator | Friday 17 April 2026 04:45:29 +0000 (0:00:02.291) 0:00:06.840 ********** 
2026-04-17 04:46:40.388689 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:46:40.388700 | orchestrator | changed: [testbed-manager] 2026-04-17 04:46:40.388711 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:46:40.388721 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:46:40.388732 | orchestrator | changed: [testbed-node-3] 2026-04-17 04:46:40.388742 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:46:40.388753 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:46:40.388789 | orchestrator | 2026-04-17 04:46:40.388800 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-17 04:46:40.388811 | orchestrator | Friday 17 April 2026 04:45:31 +0000 (0:00:01.547) 0:00:08.388 ********** 2026-04-17 04:46:40.388822 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:46:40.388832 | orchestrator | changed: [testbed-node-3] 2026-04-17 04:46:40.388858 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:46:40.388869 | orchestrator | changed: [testbed-manager] 2026-04-17 04:46:40.388880 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:46:40.388890 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:46:40.388901 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:46:40.388911 | orchestrator | 2026-04-17 04:46:40.388922 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-17 04:46:40.388933 | orchestrator | Friday 17 April 2026 04:45:48 +0000 (0:00:17.746) 0:00:26.134 ********** 2026-04-17 04:46:40.388944 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:46:40.388954 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:46:40.388964 | orchestrator | changed: [testbed-node-3] 2026-04-17 04:46:40.388975 | orchestrator | changed: [testbed-manager] 2026-04-17 04:46:40.388985 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:46:40.388996 | orchestrator | changed: [testbed-node-1] 2026-04-17 
04:46:40.389007 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:46:40.389017 | orchestrator | 2026-04-17 04:46:40.389028 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-17 04:46:40.389039 | orchestrator | Friday 17 April 2026 04:46:13 +0000 (0:00:24.763) 0:00:50.898 ********** 2026-04-17 04:46:40.389050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 04:46:40.389063 | orchestrator | 2026-04-17 04:46:40.389074 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-17 04:46:40.389084 | orchestrator | Friday 17 April 2026 04:46:15 +0000 (0:00:01.688) 0:00:52.587 ********** 2026-04-17 04:46:40.389095 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-17 04:46:40.389106 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-17 04:46:40.389117 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-17 04:46:40.389128 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-17 04:46:40.389156 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-17 04:46:40.389168 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-17 04:46:40.389178 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-17 04:46:40.389189 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-17 04:46:40.389200 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-17 04:46:40.389210 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-17 04:46:40.389221 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-04-17 04:46:40.389231 | orchestrator | changed: [testbed-node-5] => 
(item=stream.conf) 2026-04-17 04:46:40.389242 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-17 04:46:40.389252 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-17 04:46:40.389263 | orchestrator | 2026-04-17 04:46:40.389274 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-17 04:46:40.389285 | orchestrator | Friday 17 April 2026 04:46:19 +0000 (0:00:03.998) 0:00:56.585 ********** 2026-04-17 04:46:40.389296 | orchestrator | ok: [testbed-manager] 2026-04-17 04:46:40.389307 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:46:40.389317 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:46:40.389369 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:46:40.389382 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:46:40.389392 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:46:40.389412 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:46:40.389422 | orchestrator | 2026-04-17 04:46:40.389433 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-17 04:46:40.389444 | orchestrator | Friday 17 April 2026 04:46:20 +0000 (0:00:01.440) 0:00:58.026 ********** 2026-04-17 04:46:40.389455 | orchestrator | changed: [testbed-manager] 2026-04-17 04:46:40.389466 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:46:40.389477 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:46:40.389487 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:46:40.389498 | orchestrator | changed: [testbed-node-3] 2026-04-17 04:46:40.389509 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:46:40.389519 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:46:40.389530 | orchestrator | 2026-04-17 04:46:40.389541 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-04-17 04:46:40.389552 | orchestrator | Friday 17 April 2026 04:46:22 +0000 
(0:00:01.419) 0:00:59.446 ********** 2026-04-17 04:46:40.389562 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:46:40.389573 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:46:40.389584 | orchestrator | ok: [testbed-manager] 2026-04-17 04:46:40.389594 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:46:40.389605 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:46:40.389616 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:46:40.389626 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:46:40.389637 | orchestrator | 2026-04-17 04:46:40.389648 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-17 04:46:40.389658 | orchestrator | Friday 17 April 2026 04:46:23 +0000 (0:00:01.373) 0:01:00.819 ********** 2026-04-17 04:46:40.389669 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:46:40.389680 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:46:40.389690 | orchestrator | ok: [testbed-manager] 2026-04-17 04:46:40.389701 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:46:40.389712 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:46:40.389722 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:46:40.389733 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:46:40.389743 | orchestrator | 2026-04-17 04:46:40.389754 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-17 04:46:40.389765 | orchestrator | Friday 17 April 2026 04:46:25 +0000 (0:00:01.727) 0:01:02.547 ********** 2026-04-17 04:46:40.389776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-17 04:46:40.389795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 04:46:40.389806 | orchestrator | 2026-04-17 
04:46:40.389817 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-17 04:46:40.389827 | orchestrator | Friday 17 April 2026 04:46:26 +0000 (0:00:01.510) 0:01:04.058 ********** 2026-04-17 04:46:40.389838 | orchestrator | changed: [testbed-manager] 2026-04-17 04:46:40.389849 | orchestrator | 2026-04-17 04:46:40.389860 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-17 04:46:40.389871 | orchestrator | Friday 17 April 2026 04:46:29 +0000 (0:00:02.310) 0:01:06.369 ********** 2026-04-17 04:46:40.389881 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:46:40.389892 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:46:40.389903 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:46:40.389914 | orchestrator | changed: [testbed-node-3] 2026-04-17 04:46:40.389924 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:46:40.389935 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:46:40.389945 | orchestrator | changed: [testbed-manager] 2026-04-17 04:46:40.389956 | orchestrator | 2026-04-17 04:46:40.389967 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 04:46:40.389978 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 04:46:40.389996 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 04:46:40.390007 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 04:46:40.390081 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 04:46:40.390102 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 04:46:40.944841 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 04:46:40.944946 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 04:46:40.944962 | orchestrator | 2026-04-17 04:46:40.944974 | orchestrator | 2026-04-17 04:46:40.944985 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 04:46:40.944998 | orchestrator | Friday 17 April 2026 04:46:40 +0000 (0:00:11.330) 0:01:17.699 ********** 2026-04-17 04:46:40.945009 | orchestrator | =============================================================================== 2026-04-17 04:46:40.945020 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 24.76s 2026-04-17 04:46:40.945031 | orchestrator | osism.services.netdata : Add repository -------------------------------- 17.75s 2026-04-17 04:46:40.945041 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.33s 2026-04-17 04:46:40.945052 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.00s 2026-04-17 04:46:40.945062 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.31s 2026-04-17 04:46:40.945073 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.29s 2026-04-17 04:46:40.945084 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.95s 2026-04-17 04:46:40.945094 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.73s 2026-04-17 04:46:40.945105 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.69s 2026-04-17 04:46:40.945115 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.55s 2026-04-17 04:46:40.945126 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 
1.51s 2026-04-17 04:46:40.945136 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.44s 2026-04-17 04:46:40.945148 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.42s 2026-04-17 04:46:40.945158 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.41s 2026-04-17 04:46:40.945169 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.37s 2026-04-17 04:46:40.945179 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.95s 2026-04-17 04:46:43.671390 | orchestrator | 2026-04-17 04:46:43 | INFO  | Task e0ea3227-97d6-4e02-9c21-a43796347397 (prometheus) was prepared for execution. 2026-04-17 04:46:43.671605 | orchestrator | 2026-04-17 04:46:43 | INFO  | It takes a moment until task e0ea3227-97d6-4e02-9c21-a43796347397 (prometheus) has been started and output is visible here. 2026-04-17 04:46:53.481536 | orchestrator | 2026-04-17 04:46:53.481650 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 04:46:53.481666 | orchestrator | 2026-04-17 04:46:53.481678 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 04:46:53.481689 | orchestrator | Friday 17 April 2026 04:46:48 +0000 (0:00:00.286) 0:00:00.286 ********** 2026-04-17 04:46:53.481700 | orchestrator | ok: [testbed-manager] 2026-04-17 04:46:53.481738 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:46:53.481749 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:46:53.481760 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:46:53.481770 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:46:53.481781 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:46:53.481806 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:46:53.481819 | orchestrator | 2026-04-17 04:46:53.481830 | orchestrator | 
TASK [Group hosts based on enabled services] *********************************** 2026-04-17 04:46:53.481841 | orchestrator | Friday 17 April 2026 04:46:48 +0000 (0:00:00.944) 0:00:01.230 ********** 2026-04-17 04:46:53.481853 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-17 04:46:53.481865 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-17 04:46:53.481876 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-17 04:46:53.481886 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-17 04:46:53.481897 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-17 04:46:53.481908 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-17 04:46:53.481918 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-17 04:46:53.481929 | orchestrator | 2026-04-17 04:46:53.481940 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-17 04:46:53.481950 | orchestrator | 2026-04-17 04:46:53.481961 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-17 04:46:53.481972 | orchestrator | Friday 17 April 2026 04:46:49 +0000 (0:00:00.993) 0:00:02.224 ********** 2026-04-17 04:46:53.481983 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 04:46:53.481995 | orchestrator | 2026-04-17 04:46:53.482008 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-17 04:46:53.482093 | orchestrator | Friday 17 April 2026 04:46:51 +0000 (0:00:01.483) 0:00:03.707 ********** 2026-04-17 04:46:53.482117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:46:53.482142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:46:53.482164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-17 04:46:53.482186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:46:53.482233 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:46:53.482255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:46:53.482268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 
04:46:53.482281 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:46:53.482294 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:46:53.482309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:46:53.482322 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:46:53.482350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:46:54.336178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:46:54.336282 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-04-17 04:46:54.336300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:46:54.336313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:46:54.336325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:46:54.336336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:46:54.336450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:46:54.336512 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-17 
04:46:54.336536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 04:46:54.336555 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 04:46:54.336572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:46:54.336590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:46:54.336609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:46:54.336642 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:46:54.336674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-04-17 04:47:00.045519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 04:47:00.045627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:00.045644 | orchestrator | 2026-04-17 04:47:00.045658 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-17 04:47:00.045677 | orchestrator | Friday 17 April 2026 04:46:54 +0000 (0:00:02.881) 0:00:06.589 ********** 2026-04-17 04:47:00.045694 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 04:47:00.045712 | orchestrator | 2026-04-17 04:47:00.045729 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-17 04:47:00.045745 | orchestrator | Friday 17 April 2026 04:46:56 +0000 (0:00:01.800) 0:00:08.390 ********** 2026-04-17 04:47:00.045761 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:00.045778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:00.045823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:00.045843 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-17 04:47:00.045894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:00.045914 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:00.045930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:00.045948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:00.045966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:00.045995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:00.046013 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:00.046100 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:00.046140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:02.112443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:02.112547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:02.112576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:02.112623 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:02.112639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 04:47:02.112657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:02.112677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 04:47:02.112737 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2026-04-17 04:47:02.112752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:02.112766 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-17 04:47:02.112788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:02.112801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:02.112812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:02.112825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:02.112845 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:03.398458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:03.398570 | orchestrator | 2026-04-17 04:47:03.398597 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-17 04:47:03.398646 | orchestrator | Friday 17 April 2026 04:47:02 +0000 (0:00:05.975) 0:00:14.365 ********** 2026-04-17 04:47:03.398668 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-17 04:47:03.398687 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:03.398706 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:03.398776 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-17 04:47:03.398822 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:03.398842 | orchestrator | skipping: [testbed-manager] 2026-04-17 04:47:03.398861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:03.398893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:03.398910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:03.398927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:03.398945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:03.398964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:03.398989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:03.399012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:03.621181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:03.621284 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:47:03.621302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:03.621315 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:47:03.621327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:03.621339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-17 04:47:03.621351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:03.621462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:03.621479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:03.621511 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:47:03.621545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:03.621557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:03.621570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 04:47:03.621589 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:47:03.621608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:03.621628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:03.621648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 04:47:03.621668 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:47:03.621709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-04-17 04:47:03.621751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:04.640974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 04:47:04.641076 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:47:04.641094 | orchestrator | 2026-04-17 04:47:04.641107 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-17 04:47:04.641120 | orchestrator | Friday 17 April 2026 04:47:03 +0000 (0:00:01.508) 0:00:15.874 ********** 2026-04-17 04:47:04.641133 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-17 04:47:04.641146 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:04.641159 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:04.641191 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-17 04:47:04.641245 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:04.641259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:04.641270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:04.641281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:04.641293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:04.641305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-17 04:47:04.641316 | orchestrator | skipping: [testbed-manager] 2026-04-17 04:47:04.641333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:04.641353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:04.641373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:05.862557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:05.862666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:05.862682 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:47:05.862696 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:47:05.862709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:05.862721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:05.862732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 04:47:05.862765 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:47:05.862792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:05.862835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-04-17 04:47:05.862865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:05.862878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:05.862890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 04:47:05.862901 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:47:05.862913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:05.862925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:05.862949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 04:47:05.862961 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:47:05.862973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 04:47:05.862991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 04:47:09.700493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 04:47:09.700581 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:47:09.700594 | orchestrator | 2026-04-17 04:47:09.700604 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-17 04:47:09.700613 | orchestrator | Friday 17 April 2026 04:47:05 +0000 (0:00:02.232) 0:00:18.107 ********** 2026-04-17 04:47:09.700622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:09.700632 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-17 04:47:09.700663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:09.700685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:09.700694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:09.700715 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:09.700724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-04-17 04:47:09.700733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:09.700742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:09.700750 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:47:09.700765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:09.700784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:09.700793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:09.700809 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:11.938500 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:11.938604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:11.938617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:11.938647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 04:47:11.938658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:11.938681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 04:47:11.938691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 04:47:11.938716 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:11.938726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:11.938737 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-17 04:47:11.938755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:47:11.938768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:11.938778 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:11.938787 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:11.938803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:47:16.094670 | orchestrator | 2026-04-17 04:47:16.094793 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-17 04:47:16.094810 | orchestrator | Friday 17 April 2026 04:47:11 +0000 (0:00:06.081) 0:00:24.188 ********** 2026-04-17 04:47:16.094823 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 04:47:16.094835 | orchestrator | 2026-04-17 04:47:16.094847 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-17 04:47:16.094871 | orchestrator | Friday 17 April 2026 04:47:12 +0000 (0:00:00.970) 0:00:25.159 ********** 2026-04-17 04:47:16.094919 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 
1093007, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.427518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.094937 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093007, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.427518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.094949 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093007, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.427518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:16.094976 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093052, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4329817, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.094989 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093007, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.427518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.095000 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093007, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.427518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.095031 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093007, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.427518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.095051 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093052, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4329817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.095063 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093007, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.427518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.095074 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093052, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4329817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.095091 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092987, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4269245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.095103 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092987, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4269245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.095115 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093052, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4329817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:16.095134 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093030, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.430778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.812723 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093052, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4329817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.812836 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093052, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4329817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.812853 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092987, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4269245, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.812883 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092987, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4269245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.812896 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092987, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4269245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.812908 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092982, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4254518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.812920 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093030, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.430778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.812969 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092987, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4269245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.812982 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093030, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.430778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.812994 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093030, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.430778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.813011 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093052, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4329817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:17.813023 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093030, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.430778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.813034 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093010, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4276376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.813053 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093030, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.430778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:17.813073 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092982, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4254518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:19.218980 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092982, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1776394402.4254518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:19.219084 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092982, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4254518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:19.219116 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092982, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4254518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:19.219130 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092982, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4254518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:19.219141 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093026, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4303126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:19.219173 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093010, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4276376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:19.219186 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093010, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4276376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:19.219214 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093010, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4276376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 04:47:19.219227 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-04-17 04:47:19.219244 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-04-17 04:47:19.219256 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-04-17 04:47:19.219268 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-04-17 04:47:19.219286 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-04-17 04:47:19.219298 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-04-17 04:47:19.219316 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-04-17 04:47:20.715780 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-04-17 04:47:20.715866 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-04-17 04:47:20.715874 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-17 04:47:20.715894 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-04-17 04:47:20.715898 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-04-17 04:47:20.715902 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-04-17 04:47:20.715907 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-04-17 04:47:20.715922 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-04-17 04:47:20.715929 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-04-17 04:47:20.715933 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-17 04:47:20.715942 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-17 04:47:20.715946 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-17 04:47:20.715950 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-17 04:47:20.715954 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-17 04:47:20.715963 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-17 04:47:22.286461 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-17 04:47:22.286576 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-17 04:47:22.286614 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-17 04:47:22.286626 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-17 04:47:22.286638 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-17 04:47:22.286649 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-17 04:47:22.286661 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-17 04:47:22.286699 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-17 04:47:22.286712 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-04-17 04:47:22.286733 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-04-17 04:47:22.286744 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-04-17 04:47:22.286755 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-17 04:47:22.286767 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-17 04:47:22.286780 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-17 04:47:22.286799 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-17 04:47:23.813390 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-17 04:47:23.813559 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-04-17 04:47:23.813578 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-04-17 04:47:23.813591 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-04-17 04:47:23.813602 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-17 04:47:23.813614 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-04-17 04:47:23.813625 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-17 04:47:23.813683 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-17 04:47:23.813697 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-17 04:47:23.813709 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-17 04:47:23.813721 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-04-17 04:47:23.813732 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-17 04:47:23.813744 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2026-04-17 04:47:23.813755 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-17 04:47:23.813788 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-17 04:47:25.075267 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-17 04:47:25.075367 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2026-04-17 04:47:25.075383 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-17 04:47:25.075397 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2026-04-17 04:47:25.075409 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2026-04-17 04:47:25.075513 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2026-04-17 04:47:25.075542 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2026-04-17 04:47:25.075573 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules)
2026-04-17 04:47:25.075585 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-04-17 04:47:25.075598 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2026-04-17 04:47:25.075609 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2026-04-17 04:47:25.075621 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules)
2026-04-17 04:47:25.075639 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules)
2026-04-17 04:47:25.075656 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules)
2026-04-17 04:47:25.075677 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2026-04-17 04:47:33.148157 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False,
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093071, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4374583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:33.148274 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:47:33.148292 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1093024, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.429072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:33.148305 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093019, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.428493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:33.148316 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 3792, 'inode': 1093019, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.428493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:33.148351 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093019, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.428493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:33.148380 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093071, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4374583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:33.148392 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:47:33.148423 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093071, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1776394402.4374583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:33.148503 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:47:33.148517 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093019, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.428493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:33.148529 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093071, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4374583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:33.148540 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:47:33.148552 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093071, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1776394402.4374583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:33.148572 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:47:33.148583 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093071, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4374583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 04:47:33.148594 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:47:33.148611 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1093018, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4282389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:33.148623 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1093004, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1776394402.4272017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:33.148642 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093048, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4325655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:58.681047 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092973, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4247653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:58.681165 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1093073, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4374583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:58.681180 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1093044, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4322495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:58.681215 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092985, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4254518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:58.681227 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092976, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4249547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:58.681252 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1093024, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.429072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:58.681263 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093019, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.428493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:58.681290 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093071, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4374583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 04:47:58.681302 | orchestrator | 2026-04-17 04:47:58.681314 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-17 04:47:58.681326 | orchestrator | Friday 17 April 2026 04:47:38 
+0000 (0:00:25.407) 0:00:50.567 **********
2026-04-17 04:47:58.681336 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 04:47:58.681347 | orchestrator |
2026-04-17 04:47:58.681357 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-04-17 04:47:58.681367 | orchestrator | Friday 17 April 2026 04:47:39 +0000 (0:00:00.787) 0:00:51.355 **********
2026-04-17 04:47:58.681384 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-04-17 04:47:58.681436 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-04-17 04:47:58.681484 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-04-17 04:47:58.681564 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-04-17 04:47:58.681619 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-04-17 04:47:58.681675 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-04-17 04:47:58.681731 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-04-17 04:47:58.681794 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 04:47:58.681805 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 04:47:58.681816 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 04:47:58.681827 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 04:47:58.681838 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 04:47:58.681848 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 04:47:58.681919 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 04:47:58.681932 | orchestrator | 2026-04-17 04:47:58.681941 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-17 04:47:58.681951 | orchestrator | Friday 17 April 2026 04:47:41 +0000 (0:00:01.945) 0:00:53.300 ********** 2026-04-17 04:47:58.681961 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 04:47:58.681979 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 04:47:58.681990 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:47:58.681999 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:47:58.682009 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 04:47:58.682076 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:47:58.682096 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 04:48:16.584759 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:48:16.584869 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 04:48:16.584887 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:48:16.584899 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 04:48:16.584910 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:48:16.584922 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-17 04:48:16.584933 | orchestrator | 2026-04-17 04:48:16.584945 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-17 04:48:16.584956 | orchestrator | Friday 17 April 2026 04:47:58 +0000 (0:00:17.632) 0:01:10.933 ********** 2026-04-17 04:48:16.584967 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 04:48:16.584978 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:48:16.584989 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 04:48:16.584999 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:48:16.585010 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 04:48:16.585021 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:48:16.585032 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 04:48:16.585043 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:48:16.585054 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 04:48:16.585064 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:48:16.585076 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 04:48:16.585086 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:48:16.585097 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-17 04:48:16.585108 | orchestrator | 2026-04-17 04:48:16.585119 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-17 04:48:16.585130 | orchestrator | Friday 17 April 2026 04:48:01 
+0000 (0:00:02.945) 0:01:13.878 ********** 2026-04-17 04:48:16.585141 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 04:48:16.585153 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:48:16.585164 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 04:48:16.585175 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:48:16.585187 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 04:48:16.585199 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:48:16.585212 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 04:48:16.585224 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:48:16.585260 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 04:48:16.585273 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:48:16.585286 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-17 04:48:16.585313 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 04:48:16.585326 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:48:16.585338 | orchestrator | 2026-04-17 04:48:16.585351 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-17 04:48:16.585363 | orchestrator | Friday 17 April 2026 04:48:03 +0000 (0:00:02.029) 0:01:15.908 ********** 2026-04-17 04:48:16.585376 
| orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 04:48:16.585388 | orchestrator | 2026-04-17 04:48:16.585401 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-17 04:48:16.585413 | orchestrator | Friday 17 April 2026 04:48:04 +0000 (0:00:00.792) 0:01:16.701 ********** 2026-04-17 04:48:16.585426 | orchestrator | skipping: [testbed-manager] 2026-04-17 04:48:16.585438 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:48:16.585450 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:48:16.585463 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:48:16.585477 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:48:16.585496 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:48:16.585515 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:48:16.585533 | orchestrator | 2026-04-17 04:48:16.585575 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-17 04:48:16.585594 | orchestrator | Friday 17 April 2026 04:48:05 +0000 (0:00:00.854) 0:01:17.555 ********** 2026-04-17 04:48:16.585611 | orchestrator | skipping: [testbed-manager] 2026-04-17 04:48:16.585628 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:48:16.585647 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:48:16.585666 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:48:16.585684 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:48:16.585702 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:48:16.585721 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:48:16.585739 | orchestrator | 2026-04-17 04:48:16.585758 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-17 04:48:16.585791 | orchestrator | Friday 17 April 2026 04:48:07 +0000 (0:00:02.396) 0:01:19.952 ********** 2026-04-17 04:48:16.585802 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 04:48:16.585813 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 04:48:16.585824 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:48:16.585836 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 04:48:16.585856 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 04:48:16.585874 | orchestrator | skipping: [testbed-manager] 2026-04-17 04:48:16.585892 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:48:16.585910 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:48:16.585930 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 04:48:16.585949 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:48:16.585968 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 04:48:16.585987 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:48:16.586003 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 04:48:16.586073 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:48:16.586088 | orchestrator | 2026-04-17 04:48:16.586099 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-17 04:48:16.586124 | orchestrator | Friday 17 April 2026 04:48:09 +0000 (0:00:01.550) 0:01:21.502 ********** 2026-04-17 04:48:16.586135 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 04:48:16.586146 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 04:48:16.586157 | orchestrator | skipping: 
[testbed-node-0]
2026-04-17 04:48:16.586168 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:48:16.586178 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-17 04:48:16.586189 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:48:16.586200 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-17 04:48:16.586210 | orchestrator | skipping: [testbed-node-3]
2026-04-17 04:48:16.586221 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-17 04:48:16.586232 | orchestrator | skipping: [testbed-node-4]
2026-04-17 04:48:16.586243 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-17 04:48:16.586254 | orchestrator | skipping: [testbed-node-5]
2026-04-17 04:48:16.586265 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-17 04:48:16.586276 | orchestrator |
2026-04-17 04:48:16.586286 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-04-17 04:48:16.586297 | orchestrator | Friday 17 April 2026 04:48:10 +0000 (0:00:01.542) 0:01:23.044 **********
2026-04-17 04:48:16.586308 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2026-04-17 04:48:16.586364 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 04:48:16.586375 | orchestrator |
2026-04-17
04:48:16.586386 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-17 04:48:16.586404 | orchestrator | Friday 17 April 2026 04:48:11 +0000 (0:00:01.177) 0:01:24.223 ********** 2026-04-17 04:48:16.586415 | orchestrator | skipping: [testbed-manager] 2026-04-17 04:48:16.586426 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:48:16.586437 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:48:16.586448 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:48:16.586459 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:48:16.586470 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:48:16.586481 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:48:16.586491 | orchestrator | 2026-04-17 04:48:16.586502 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-17 04:48:16.586513 | orchestrator | Friday 17 April 2026 04:48:13 +0000 (0:00:01.050) 0:01:25.275 ********** 2026-04-17 04:48:16.586524 | orchestrator | skipping: [testbed-manager] 2026-04-17 04:48:16.586535 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:48:16.586594 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:48:16.586607 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:48:16.586617 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:48:16.586628 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:48:16.586639 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:48:16.586649 | orchestrator | 2026-04-17 04:48:16.586660 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-04-17 04:48:16.586671 | orchestrator | Friday 17 April 2026 04:48:14 +0000 (0:00:01.023) 0:01:26.299 ********** 2026-04-17 04:48:16.586697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:48:17.957750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:48:17.957856 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-17 04:48:17.957872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:48:17.957886 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:48:17.957924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:48:17.957946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}}) 2026-04-17 04:48:17.957995 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 04:48:17.958111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:48:17.958135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:48:17.958147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:48:17.958194 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:48:17.958207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:48:17.958226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:48:17.958238 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:48:17.958269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:48:20.066692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:48:20.066796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 04:48:20.066811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:48:20.066823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 04:48:20.066836 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 04:48:20.066865 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-17 04:48:20.066922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:48:20.066937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:48:20.066949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 04:48:20.066967 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:48:20.066987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:48:20.067014 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:48:20.067046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 04:48:20.067061 | orchestrator | 2026-04-17 04:48:20.067074 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-17 04:48:20.067086 | orchestrator | Friday 17 April 2026 04:48:17 +0000 (0:00:03.916) 0:01:30.215 ********** 2026-04-17 04:48:20.067098 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-17 04:48:20.067109 | orchestrator | skipping: [testbed-manager] 2026-04-17 04:48:20.067121 | orchestrator | 2026-04-17 04:48:20.067132 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-17 04:48:20.067143 | orchestrator | Friday 17 April 2026 04:48:19 +0000 (0:00:01.522) 0:01:31.737 ********** 2026-04-17 04:48:20.067154 | orchestrator | 2026-04-17 04:48:20.067165 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-17 04:48:20.067176 | orchestrator | Friday 
17 April 2026 04:48:19 +0000 (0:00:00.078) 0:01:31.816 ********** 2026-04-17 04:48:20.067187 | orchestrator | 2026-04-17 04:48:20.067214 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-17 04:48:20.067226 | orchestrator | Friday 17 April 2026 04:48:19 +0000 (0:00:00.076) 0:01:31.892 ********** 2026-04-17 04:48:20.067237 | orchestrator | 2026-04-17 04:48:20.067259 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-17 04:48:20.067279 | orchestrator | Friday 17 April 2026 04:48:19 +0000 (0:00:00.098) 0:01:31.991 ********** 2026-04-17 04:50:10.356904 | orchestrator | 2026-04-17 04:50:10.357009 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-17 04:50:10.357028 | orchestrator | Friday 17 April 2026 04:48:19 +0000 (0:00:00.072) 0:01:32.064 ********** 2026-04-17 04:50:10.357040 | orchestrator | 2026-04-17 04:50:10.357051 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-17 04:50:10.357062 | orchestrator | Friday 17 April 2026 04:48:19 +0000 (0:00:00.065) 0:01:32.129 ********** 2026-04-17 04:50:10.357073 | orchestrator | 2026-04-17 04:50:10.357084 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-17 04:50:10.357095 | orchestrator | Friday 17 April 2026 04:48:19 +0000 (0:00:00.068) 0:01:32.197 ********** 2026-04-17 04:50:10.357105 | orchestrator | 2026-04-17 04:50:10.357116 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-17 04:50:10.357127 | orchestrator | Friday 17 April 2026 04:48:20 +0000 (0:00:00.114) 0:01:32.312 ********** 2026-04-17 04:50:10.357137 | orchestrator | changed: [testbed-manager] 2026-04-17 04:50:10.357149 | orchestrator | 2026-04-17 04:50:10.357160 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-node-exporter container] ****** 2026-04-17 04:50:10.357171 | orchestrator | Friday 17 April 2026 04:48:42 +0000 (0:00:22.524) 0:01:54.837 ********** 2026-04-17 04:50:10.357182 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:50:10.357193 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:50:10.357204 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:50:10.357214 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:50:10.357225 | orchestrator | changed: [testbed-manager] 2026-04-17 04:50:10.357236 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:50:10.357246 | orchestrator | changed: [testbed-node-3] 2026-04-17 04:50:10.357258 | orchestrator | 2026-04-17 04:50:10.357268 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-17 04:50:10.357279 | orchestrator | Friday 17 April 2026 04:48:56 +0000 (0:00:14.219) 0:02:09.056 ********** 2026-04-17 04:50:10.357290 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:50:10.357301 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:50:10.357332 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:50:10.357344 | orchestrator | 2026-04-17 04:50:10.357355 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-17 04:50:10.357366 | orchestrator | Friday 17 April 2026 04:49:07 +0000 (0:00:10.739) 0:02:19.795 ********** 2026-04-17 04:50:10.357377 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:50:10.357388 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:50:10.357398 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:50:10.357409 | orchestrator | 2026-04-17 04:50:10.357422 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-17 04:50:10.357434 | orchestrator | Friday 17 April 2026 04:49:18 +0000 (0:00:10.807) 0:02:30.603 ********** 2026-04-17 04:50:10.357446 | orchestrator | changed: 
[testbed-manager] 2026-04-17 04:50:10.357459 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:50:10.357471 | orchestrator | changed: [testbed-node-3] 2026-04-17 04:50:10.357484 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:50:10.357497 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:50:10.357509 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:50:10.357521 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:50:10.357532 | orchestrator | 2026-04-17 04:50:10.357544 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-17 04:50:10.357556 | orchestrator | Friday 17 April 2026 04:49:33 +0000 (0:00:14.726) 0:02:45.330 ********** 2026-04-17 04:50:10.357568 | orchestrator | changed: [testbed-manager] 2026-04-17 04:50:10.357580 | orchestrator | 2026-04-17 04:50:10.357593 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-17 04:50:10.357605 | orchestrator | Friday 17 April 2026 04:49:42 +0000 (0:00:08.980) 0:02:54.310 ********** 2026-04-17 04:50:10.357618 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:50:10.357642 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:50:10.357654 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:50:10.357667 | orchestrator | 2026-04-17 04:50:10.357679 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-17 04:50:10.357692 | orchestrator | Friday 17 April 2026 04:49:48 +0000 (0:00:06.067) 0:03:00.377 ********** 2026-04-17 04:50:10.357704 | orchestrator | changed: [testbed-manager] 2026-04-17 04:50:10.357716 | orchestrator | 2026-04-17 04:50:10.357728 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-17 04:50:10.357741 | orchestrator | Friday 17 April 2026 04:49:59 +0000 (0:00:11.061) 0:03:11.439 ********** 2026-04-17 04:50:10.357753 | orchestrator | changed: 
[testbed-node-3] 2026-04-17 04:50:10.357766 | orchestrator | changed: [testbed-node-4] 2026-04-17 04:50:10.357777 | orchestrator | changed: [testbed-node-5] 2026-04-17 04:50:10.357812 | orchestrator | 2026-04-17 04:50:10.357824 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 04:50:10.357836 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-17 04:50:10.357848 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-17 04:50:10.357858 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-17 04:50:10.357869 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-17 04:50:10.357880 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 04:50:10.357906 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 04:50:10.357927 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 04:50:10.357938 | orchestrator | 2026-04-17 04:50:10.357956 | orchestrator | 2026-04-17 04:50:10.357975 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 04:50:10.358002 | orchestrator | Friday 17 April 2026 04:50:09 +0000 (0:00:10.586) 0:03:22.026 ********** 2026-04-17 04:50:10.358093 | orchestrator | =============================================================================== 2026-04-17 04:50:10.358116 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.41s 2026-04-17 04:50:10.358137 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.52s 2026-04-17 04:50:10.358155 | 
orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.63s 2026-04-17 04:50:10.358176 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.73s 2026-04-17 04:50:10.358194 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.22s 2026-04-17 04:50:10.358212 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 11.06s 2026-04-17 04:50:10.358231 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.81s 2026-04-17 04:50:10.358250 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.74s 2026-04-17 04:50:10.358269 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.59s 2026-04-17 04:50:10.358289 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.98s 2026-04-17 04:50:10.358308 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.08s 2026-04-17 04:50:10.358326 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.07s 2026-04-17 04:50:10.358338 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.98s 2026-04-17 04:50:10.358348 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.92s 2026-04-17 04:50:10.358359 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.95s 2026-04-17 04:50:10.358370 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.88s 2026-04-17 04:50:10.358380 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.40s 2026-04-17 04:50:10.358391 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.23s 2026-04-17 04:50:10.358401 | orchestrator | 
prometheus : Copying over prometheus alertmanager config file ----------- 2.03s 2026-04-17 04:50:10.358412 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.95s 2026-04-17 04:50:12.958481 | orchestrator | 2026-04-17 04:50:12 | INFO  | Task ef12487b-ccf0-4277-9063-d312fa68ab77 (grafana) was prepared for execution. 2026-04-17 04:50:12.958580 | orchestrator | 2026-04-17 04:50:12 | INFO  | It takes a moment until task ef12487b-ccf0-4277-9063-d312fa68ab77 (grafana) has been started and output is visible here. 2026-04-17 04:50:23.026711 | orchestrator | 2026-04-17 04:50:23.026842 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 04:50:23.026858 | orchestrator | 2026-04-17 04:50:23.026868 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 04:50:23.026892 | orchestrator | Friday 17 April 2026 04:50:17 +0000 (0:00:00.275) 0:00:00.275 ********** 2026-04-17 04:50:23.026902 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:50:23.026912 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:50:23.026921 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:50:23.026930 | orchestrator | 2026-04-17 04:50:23.026939 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 04:50:23.026947 | orchestrator | Friday 17 April 2026 04:50:17 +0000 (0:00:00.352) 0:00:00.628 ********** 2026-04-17 04:50:23.026956 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-17 04:50:23.026965 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-17 04:50:23.026992 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-17 04:50:23.027002 | orchestrator | 2026-04-17 04:50:23.027010 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-17 04:50:23.027019 | orchestrator | 2026-04-17 
04:50:23.027027 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-17 04:50:23.027036 | orchestrator | Friday 17 April 2026 04:50:18 +0000 (0:00:00.471) 0:00:01.099 ********** 2026-04-17 04:50:23.027045 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:50:23.027054 | orchestrator | 2026-04-17 04:50:23.027063 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-17 04:50:23.027071 | orchestrator | Friday 17 April 2026 04:50:18 +0000 (0:00:00.595) 0:00:01.694 ********** 2026-04-17 04:50:23.027083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:23.027149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:23.027162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:23.027171 | orchestrator | 2026-04-17 04:50:23.027180 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-17 04:50:23.027188 | orchestrator | Friday 17 April 2026 04:50:19 +0000 (0:00:00.934) 0:00:02.629 ********** 2026-04-17 04:50:23.027197 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-17 04:50:23.027207 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-17 04:50:23.027216 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 04:50:23.027225 | orchestrator | 2026-04-17 04:50:23.027233 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-17 04:50:23.027242 | orchestrator | Friday 17 April 2026 04:50:20 +0000 (0:00:00.863) 0:00:03.492 ********** 2026-04-17 04:50:23.027251 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:50:23.027261 | orchestrator | 2026-04-17 04:50:23.027271 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA 
certificates] ******** 2026-04-17 04:50:23.027289 | orchestrator | Friday 17 April 2026 04:50:21 +0000 (0:00:00.566) 0:00:04.059 ********** 2026-04-17 04:50:23.027322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:23.027334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:23.027346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:23.027356 | orchestrator | 2026-04-17 04:50:23.027366 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-17 04:50:23.027406 | orchestrator | Friday 17 April 2026 04:50:22 +0000 (0:00:01.289) 0:00:05.348 ********** 2026-04-17 04:50:23.027418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 04:50:23.027429 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:50:23.027439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 04:50:23.027450 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:50:23.027480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 04:50:30.012365 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:50:30.012478 | orchestrator | 2026-04-17 04:50:30.012495 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-17 04:50:30.012509 | orchestrator | Friday 17 April 2026 04:50:23 +0000 (0:00:00.660) 0:00:06.009 ********** 2026-04-17 04:50:30.012523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 
04:50:30.012537 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:50:30.012550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 04:50:30.012562 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:50:30.012574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 04:50:30.012586 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:50:30.012598 | orchestrator | 2026-04-17 04:50:30.012609 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-17 04:50:30.012620 | orchestrator | Friday 17 April 2026 04:50:23 +0000 (0:00:00.626) 0:00:06.636 ********** 2026-04-17 04:50:30.012632 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:30.012666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:30.012711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:30.012724 | orchestrator | 2026-04-17 04:50:30.012736 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-17 04:50:30.012747 | orchestrator | Friday 17 April 2026 04:50:24 +0000 (0:00:01.223) 0:00:07.860 ********** 2026-04-17 04:50:30.012758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:30.012770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:30.012781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 04:50:30.012793 | orchestrator | 2026-04-17 04:50:30.012812 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-17 04:50:30.012883 | orchestrator | Friday 17 April 2026 04:50:26 +0000 (0:00:01.697) 0:00:09.557 ********** 2026-04-17 04:50:30.012896 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:50:30.012910 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:50:30.012923 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:50:30.012942 | orchestrator | 2026-04-17 04:50:30.012961 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-17 04:50:30.012980 | orchestrator | Friday 17 April 2026 04:50:26 +0000 (0:00:00.343) 0:00:09.900 ********** 2026-04-17 04:50:30.012998 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-17 04:50:30.013017 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-17 04:50:30.013036 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-17 04:50:30.013053 | orchestrator | 2026-04-17 04:50:30.013072 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-17 04:50:30.013092 | orchestrator | Friday 17 April 2026 04:50:28 +0000 (0:00:01.247) 0:00:11.148 
********** 2026-04-17 04:50:30.013111 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-17 04:50:30.013131 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-17 04:50:30.013151 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-17 04:50:30.013171 | orchestrator | 2026-04-17 04:50:30.013200 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-04-17 04:50:30.013227 | orchestrator | Friday 17 April 2026 04:50:29 +0000 (0:00:01.839) 0:00:12.988 ********** 2026-04-17 04:50:36.390821 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 04:50:36.390991 | orchestrator | 2026-04-17 04:50:36.391019 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-04-17 04:50:36.391041 | orchestrator | Friday 17 April 2026 04:50:30 +0000 (0:00:00.777) 0:00:13.765 ********** 2026-04-17 04:50:36.391059 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-04-17 04:50:36.391080 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-04-17 04:50:36.391100 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:50:36.391121 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:50:36.391135 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:50:36.391146 | orchestrator | 2026-04-17 04:50:36.391157 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-04-17 04:50:36.391168 | orchestrator | Friday 17 April 2026 04:50:31 +0000 (0:00:00.707) 0:00:14.472 ********** 2026-04-17 04:50:36.391179 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:50:36.391190 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
04:50:36.391201 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:50:36.391211 | orchestrator | 2026-04-17 04:50:36.391222 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-17 04:50:36.391233 | orchestrator | Friday 17 April 2026 04:50:31 +0000 (0:00:00.361) 0:00:14.834 ********** 2026-04-17 04:50:36.391248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1092665, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3715572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:36.391290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1092665, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3715572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:36.391303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1092665, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3715572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:36.391315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1092746, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3882387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:36.391362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1092746, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3882387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:36.391375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1092746, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3882387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:36.391386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1092683, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.374409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:36.391405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1092683, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.374409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:36.391417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1092683, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.374409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:36.391428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1092748, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3898153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:36.391439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1092748, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3898153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:36.391463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1092748, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3898153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:40.195482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1092702, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3783112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:40.195612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1092702, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3783112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:40.195654 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1092702, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3783112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:40.195668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1092721, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3854141, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:40.195681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1092721, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3854141, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:40.195706 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1092721, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3854141, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:40.195789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092662, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3698533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:40.195804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092662, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3698533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:40.195825 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092662, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3698533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:40.195836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092674, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3732178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:40.195894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092674, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3732178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:40.195912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092674, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3732178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:40.195932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1092686, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3747764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1092686, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3747764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1092686, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3747764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1092710, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3794556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1092710, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3794556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1092710, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3794556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092741, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092741, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092741, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3878353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092679, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3739858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092679, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3739858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092679, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3739858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1092718, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3826275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:44.078824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1092718, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3826275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1092718, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3826275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1092704, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3794556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1092704, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3794556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1092704, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3794556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1092696, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3780644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1092696, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3780644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1092696, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3780644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1092694, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3768013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1092694, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3768013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1092694, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3768013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1092714, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.381695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1092714, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.381695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:47.787777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1092714, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.381695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1092690, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3756866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1092690, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3756866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1092690, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3756866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092731, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3862026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092731, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3862026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092731, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3862026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092950, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4233139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092950, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4233139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092950, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4233139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092790, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4006345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092790, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4006345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092790, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4006345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:51.760931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1092769, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3931267, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1092769, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3931267, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1092769, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3931267, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092819, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4032812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092819, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4032812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092819, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4032812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1092759, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3900824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1092759, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3900824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1092759, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3900824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092882, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4132123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092882, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4132123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092882, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4132123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092822, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4105818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:55.644756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092822, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4105818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:59.608572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092822, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4105818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:50:59.608681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1092889, 'dev': 106, 'nlink': 1,
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4140248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:59.608698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1092889, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4140248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:59.608746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1092889, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4140248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:59.608760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092943, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4220018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:59.608773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092943, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4220018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:59.608802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092943, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4220018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:59.608842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092876, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.412405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:59.608856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092876, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.412405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:59.608941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092876, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.412405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:59.608956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1092811, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4020865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:59.608967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1092811, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4020865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:50:59.608989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1092811, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4020865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.307949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1092787, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.395647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1092787, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.395647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1092787, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.395647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308122 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1092807, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4017184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1092807, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4017184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1092807, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4017184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308176 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1092782, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.39514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1092782, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.39514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1092782, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.39514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1092813, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4029512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1092813, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4029512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1092813, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4029512, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:03.308273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1092913, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4215558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1092913, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4215558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1092913, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1776394402.4215558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092898, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4177988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092898, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4177988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092898, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4177988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1092765, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3910048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1092765, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3910048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1092765, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3910048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1092767, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3910048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1092767, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3910048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1092767, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.3910048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092870, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4114974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 04:51:07.198509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092870, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4114974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 
04:53:03.532284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092870, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.4114974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:53:03.532423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1092894, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.414452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:53:03.532443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1092894, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.414452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:53:03.532456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1092894, 'dev': 106, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776394402.414452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 04:53:03.532467 | orchestrator |
2026-04-17 04:53:03.532480 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-04-17 04:53:03.532493 | orchestrator | Friday 17 April 2026 04:51:08 +0000 (0:00:36.601) 0:00:51.436 **********
2026-04-17 04:53:03.532504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-17 04:53:03.532559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-17 04:53:03.532571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-17 04:53:03.532582 | orchestrator |
2026-04-17 04:53:03.532593 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-04-17 04:53:03.532604 | orchestrator | Friday 17 April 2026 04:51:09 +0000 (0:00:01.013) 0:00:52.449 **********
2026-04-17 04:53:03.532615 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:53:03.532627 | orchestrator |
2026-04-17 04:53:03.532637 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-04-17 04:53:03.532647 | orchestrator | Friday 17 April 2026 04:51:11 +0000 (0:00:02.204) 0:00:54.654 **********
2026-04-17 04:53:03.532656 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:53:03.532665 | orchestrator |
2026-04-17 04:53:03.532674 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-17 04:53:03.532689 | orchestrator | Friday 17 April 2026 04:51:13 +0000 (0:00:02.249) 0:00:56.904 **********
2026-04-17 04:53:03.532700 | orchestrator |
2026-04-17 04:53:03.532710 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-17 04:53:03.532721 | orchestrator | Friday 17 April 2026 04:51:13 +0000 (0:00:00.071) 0:00:56.976 **********
2026-04-17 04:53:03.532731 | orchestrator |
2026-04-17 04:53:03.532741 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-17 04:53:03.532751 | orchestrator | Friday 17 April 2026 04:51:14 +0000 (0:00:00.092) 0:00:57.068 **********
2026-04-17 04:53:03.532762 | orchestrator |
2026-04-17 04:53:03.532772 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-04-17 04:53:03.532783 | orchestrator | Friday 17 April 2026 04:51:14 +0000 (0:00:00.077) 0:00:57.145 **********
2026-04-17 04:53:03.532789 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:53:03.532797 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:53:03.532803 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:53:03.532810 | orchestrator |
2026-04-17 04:53:03.532818 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-04-17 04:53:03.532825 | orchestrator | Friday 17 April 2026 04:51:21 +0000 (0:00:07.112) 0:01:04.257 **********
2026-04-17 04:53:03.532832 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:53:03.532839 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:53:03.532846 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-04-17 04:53:03.532855 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-04-17 04:53:03.532862 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-04-17 04:53:03.532879 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-04-17 04:53:03.532886 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:53:03.532894 | orchestrator |
2026-04-17 04:53:03.532901 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-04-17 04:53:03.532909 | orchestrator | Friday 17 April 2026 04:52:11 +0000 (0:00:49.833) 0:01:54.091 **********
2026-04-17 04:53:03.532916 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:53:03.532923 | orchestrator | changed: [testbed-node-2]
2026-04-17 04:53:03.532930 | orchestrator | changed: [testbed-node-1]
2026-04-17 04:53:03.532937 | orchestrator |
2026-04-17 04:53:03.532944 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-04-17 04:53:03.532952 | orchestrator | Friday 17 April 2026 04:52:58 +0000 (0:00:47.325) 0:02:41.416 **********
2026-04-17 04:53:03.532959 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:53:03.532966 | orchestrator |
2026-04-17 04:53:03.532973 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-04-17 04:53:03.532980 | orchestrator | Friday 17 April 2026 04:53:00 +0000 (0:00:02.176) 0:02:43.592 **********
2026-04-17 04:53:03.532988 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:53:03.532995 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:53:03.533002 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:53:03.533009 | orchestrator |
2026-04-17 04:53:03.533016 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-04-17 04:53:03.533024 | orchestrator | Friday 17 April 2026 04:53:00 +0000 (0:00:00.339) 0:02:43.932 **********
2026-04-17 04:53:03.533032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-04-17 04:53:03.533048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-04-17 04:53:04.203469 | orchestrator |
2026-04-17 04:53:04.203574 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-04-17 04:53:04.203590 | orchestrator | Friday 17 April 2026 04:53:03 +0000 (0:00:02.574) 0:02:46.506 **********
2026-04-17 04:53:04.203601 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:53:04.203611 | orchestrator |
2026-04-17 04:53:04.203622 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:53:04.203632 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 04:53:04.203644 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 04:53:04.203654 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 04:53:04.203663 | orchestrator |
2026-04-17 04:53:04.203673 | orchestrator |
2026-04-17 04:53:04.203682 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:53:04.203692 | orchestrator | Friday 17 April 2026 04:53:03 +0000 (0:00:00.284) 0:02:46.791 **********
2026-04-17 04:53:04.203701 | orchestrator | ===============================================================================
2026-04-17 04:53:04.203711 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 49.83s
2026-04-17 04:53:04.203739 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 47.33s
2026-04-17 04:53:04.203749 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.60s
2026-04-17 04:53:04.203784 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.11s
2026-04-17 04:53:04.203794 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.57s
2026-04-17 04:53:04.203804 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.25s
2026-04-17 04:53:04.203814 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.20s
2026-04-17 04:53:04.203823 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.18s
2026-04-17 04:53:04.203833 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.84s
2026-04-17 04:53:04.203842 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.70s
2026-04-17 04:53:04.203852 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.29s
2026-04-17 04:53:04.203861 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.25s
2026-04-17 04:53:04.203870 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.22s
2026-04-17 04:53:04.203880 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.01s
2026-04-17 04:53:04.203889 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.93s
2026-04-17 04:53:04.203898 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.86s
2026-04-17 04:53:04.203908 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.78s
2026-04-17 04:53:04.203917 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.71s
2026-04-17 04:53:04.203927 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.66s
2026-04-17 04:53:04.203937 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.63s
2026-04-17 04:53:04.552047 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh
2026-04-17 04:53:04.559348 | orchestrator | + set -e
2026-04-17 04:53:04.559893 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-17 04:53:04.559924 | orchestrator | ++ export INTERACTIVE=false
2026-04-17 04:53:04.560016 | orchestrator | ++ INTERACTIVE=false
2026-04-17 04:53:04.560031 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-17 04:53:04.560043 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-17 04:53:04.560054 | orchestrator | + source /opt/manager-vars.sh
2026-04-17 04:53:04.560064 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-17 04:53:04.560075 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-17 04:53:04.560086 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-17 04:53:04.560126 | orchestrator | ++ CEPH_VERSION=reef
2026-04-17 04:53:04.560139 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-17 04:53:04.560150 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-17 04:53:04.560161 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-17 04:53:04.560172 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-17 04:53:04.560183 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-17 04:53:04.560195 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-17 04:53:04.560206 | orchestrator | ++ export ARA=false
2026-04-17 04:53:04.560218 | orchestrator | ++ ARA=false
2026-04-17 04:53:04.560229 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-17 04:53:04.560239 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-17 04:53:04.560250 | orchestrator | ++ export TEMPEST=false
2026-04-17 04:53:04.560261 | orchestrator | ++ TEMPEST=false
2026-04-17 04:53:04.560271 | orchestrator | ++ export IS_ZUUL=true
2026-04-17 04:53:04.560282 | orchestrator | ++ IS_ZUUL=true
2026-04-17 04:53:04.560293 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96
2026-04-17 04:53:04.560304 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96
2026-04-17 04:53:04.560315 | orchestrator | ++ export EXTERNAL_API=false
2026-04-17 04:53:04.560326 | orchestrator | ++ EXTERNAL_API=false
2026-04-17 04:53:04.560337 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-17 04:53:04.560347 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-17 04:53:04.560358 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-17 04:53:04.560369 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-17 04:53:04.560380 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-17 04:53:04.560391 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-17 04:53:04.560408 | orchestrator | ++ semver 9.5.0 8.0.0
2026-04-17 04:53:04.612689 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-17 04:53:04.612779 | orchestrator | + osism apply clusterapi
2026-04-17 04:53:06.733156 | orchestrator | 2026-04-17 04:53:06 | INFO  | Task e7bf62a9-09a0-4072-afc4-cf852dd28138 (clusterapi) was prepared for execution.
2026-04-17 04:53:06.733235 | orchestrator | 2026-04-17 04:53:06 | INFO  | It takes a moment until task e7bf62a9-09a0-4072-afc4-cf852dd28138 (clusterapi) has been started and output is visible here.
2026-04-17 04:54:03.190357 | orchestrator |
2026-04-17 04:54:03.190474 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-04-17 04:54:03.190491 | orchestrator |
2026-04-17 04:54:03.190503 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-04-17 04:54:03.190514 | orchestrator | Friday 17 April 2026 04:53:11 +0000 (0:00:00.218) 0:00:00.218 **********
2026-04-17 04:54:03.190526 | orchestrator | included: cert_manager for testbed-manager
2026-04-17 04:54:03.190538 | orchestrator |
2026-04-17 04:54:03.190549 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-04-17 04:54:03.190560 | orchestrator | Friday 17 April 2026 04:53:11 +0000 (0:00:00.265) 0:00:00.483 **********
2026-04-17 04:54:03.190571 | orchestrator | changed: [testbed-manager]
2026-04-17 04:54:03.190583 | orchestrator |
2026-04-17 04:54:03.190594 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-04-17 04:54:03.190605 | orchestrator | Friday 17 April 2026 04:53:17 +0000 (0:00:05.480) 0:00:05.964 **********
2026-04-17 04:54:03.190616 | orchestrator | changed: [testbed-manager]
2026-04-17 04:54:03.190627 | orchestrator |
2026-04-17 04:54:03.190638 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-04-17 04:54:03.190667 | orchestrator |
2026-04-17 04:54:03.190678 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-04-17 04:54:03.190699 | orchestrator | Friday 17 April 2026 04:53:40 +0000 (0:00:23.808) 0:00:29.772 **********
2026-04-17 04:54:03.190711 | orchestrator | ok: [testbed-manager]
2026-04-17 04:54:03.190722 | orchestrator |
2026-04-17 04:54:03.190734 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-04-17 04:54:03.190744 | orchestrator | Friday 17 April 2026 04:53:41 +0000 (0:00:01.114) 0:00:30.887 **********
2026-04-17 04:54:03.190755 | orchestrator | ok: [testbed-manager]
2026-04-17 04:54:03.190766 | orchestrator |
2026-04-17 04:54:03.190793 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-04-17 04:54:03.190805 | orchestrator | Friday 17 April 2026 04:53:42 +0000 (0:00:00.184) 0:00:31.071 **********
2026-04-17 04:54:03.190815 | orchestrator | ok: [testbed-manager]
2026-04-17 04:54:03.190827 | orchestrator |
2026-04-17 04:54:03.190838 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-04-17 04:54:03.190849 | orchestrator | Friday 17 April 2026 04:54:00 +0000 (0:00:18.102) 0:00:49.174 **********
2026-04-17 04:54:03.190860 | orchestrator | skipping: [testbed-manager]
2026-04-17 04:54:03.190871 | orchestrator |
2026-04-17 04:54:03.190882 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-04-17 04:54:03.190894 | orchestrator | Friday 17 April 2026 04:54:00 +0000 (0:00:00.154) 0:00:49.329 **********
2026-04-17 04:54:03.190907 | orchestrator | changed: [testbed-manager]
2026-04-17 04:54:03.190920 | orchestrator |
2026-04-17 04:54:03.190933 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 04:54:03.190947 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-17 04:54:03.190960 | orchestrator |
2026-04-17 04:54:03.190973 | orchestrator |
2026-04-17 04:54:03.190986 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 04:54:03.190999 | orchestrator | Friday 17 April 2026 04:54:02 +0000 (0:00:02.336) 0:00:51.665 **********
2026-04-17 04:54:03.191012 | orchestrator | ===============================================================================
2026-04-17 04:54:03.191025 | orchestrator | cert_manager : Deploy cert-manager ------------------------------------- 23.81s
2026-04-17 04:54:03.191038 | orchestrator | Initialize the CAPI management cluster --------------------------------- 18.10s
2026-04-17 04:54:03.191072 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.48s
2026-04-17 04:54:03.191085 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.34s
2026-04-17 04:54:03.191098 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.11s
2026-04-17 04:54:03.191110 | orchestrator | Include cert_manager role ----------------------------------------------- 0.27s
2026-04-17 04:54:03.191123 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.18s
2026-04-17 04:54:03.191136 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.15s
2026-04-17 04:54:03.632407 | orchestrator | + osism apply magnum
2026-04-17 04:54:05.731804 | orchestrator | 2026-04-17 04:54:05 | INFO  | Task d4e3ed95-2138-4dce-9a78-5f812c4423dc (magnum) was prepared for execution.
2026-04-17 04:54:05.731912 | orchestrator | 2026-04-17 04:54:05 | INFO  | It takes a moment until task d4e3ed95-2138-4dce-9a78-5f812c4423dc (magnum) has been started and output is visible here.
2026-04-17 04:54:47.060875 | orchestrator |
2026-04-17 04:54:47.060995 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 04:54:47.061012 | orchestrator |
2026-04-17 04:54:47.061024 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 04:54:47.061036 | orchestrator | Friday 17 April 2026 04:54:10 +0000 (0:00:00.285) 0:00:00.285 **********
2026-04-17 04:54:47.061047 | orchestrator | ok: [testbed-node-0]
2026-04-17 04:54:47.061060 | orchestrator | ok: [testbed-node-1]
2026-04-17 04:54:47.061071 | orchestrator | ok: [testbed-node-2]
2026-04-17 04:54:47.061082 | orchestrator |
2026-04-17 04:54:47.061093 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 04:54:47.061104 | orchestrator | Friday 17 April 2026 04:54:10 +0000 (0:00:00.325) 0:00:00.611 **********
2026-04-17 04:54:47.061115 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-17 04:54:47.061127 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-17 04:54:47.061137 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-17 04:54:47.061148 | orchestrator |
2026-04-17 04:54:47.061159 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-17 04:54:47.061170 | orchestrator |
2026-04-17 04:54:47.061181 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-17 04:54:47.061192 | orchestrator | Friday 17 April 2026 04:54:11 +0000 (0:00:00.519) 0:00:01.130 **********
2026-04-17 04:54:47.061203 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 04:54:47.061214 | orchestrator |
2026-04-17 04:54:47.061225 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-04-17 04:54:47.061235 | orchestrator | Friday 17 April 2026 04:54:11 +0000 (0:00:00.597) 0:00:01.728 **********
2026-04-17 04:54:47.061247 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-04-17 04:54:47.061257 | orchestrator |
2026-04-17 04:54:47.061268 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-04-17 04:54:47.061341 | orchestrator | Friday 17 April 2026 04:54:15 +0000 (0:00:03.398) 0:00:05.126 **********
2026-04-17 04:54:47.061355 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-04-17 04:54:47.061367 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-04-17 04:54:47.061377 | orchestrator |
2026-04-17 04:54:47.061388 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-04-17 04:54:47.061399 | orchestrator | Friday 17 April 2026 04:54:21 +0000 (0:00:06.205) 0:00:11.332 **********
2026-04-17 04:54:47.061411 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-17 04:54:47.061424 | orchestrator |
2026-04-17 04:54:47.061437 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-04-17 04:54:47.061474 | orchestrator | Friday 17 April 2026 04:54:24 +0000 (0:00:03.295) 0:00:14.628 **********
2026-04-17 04:54:47.061502 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-17 04:54:47.061516 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-04-17 04:54:47.061529 | orchestrator |
2026-04-17 04:54:47.061542 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-04-17 04:54:47.061554 | orchestrator | Friday 17 April 2026 04:54:28 +0000 (0:00:03.793) 0:00:18.421 **********
2026-04-17 04:54:47.061567 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-17 04:54:47.061579 | orchestrator |
2026-04-17 04:54:47.061592 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-04-17 04:54:47.061604 | orchestrator | Friday 17 April 2026 04:54:31 +0000 (0:00:03.073) 0:00:21.495 **********
2026-04-17 04:54:47.061617 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-04-17 04:54:47.061629 | orchestrator |
2026-04-17 04:54:47.061641 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-04-17 04:54:47.061654 | orchestrator | Friday 17 April 2026 04:54:35 +0000 (0:00:03.785) 0:00:25.280 **********
2026-04-17 04:54:47.061667 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:54:47.061680 | orchestrator |
2026-04-17 04:54:47.061693 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-04-17 04:54:47.061706 | orchestrator | Friday 17 April 2026 04:54:38 +0000 (0:00:03.212) 0:00:28.493 **********
2026-04-17 04:54:47.061718 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:54:47.061730 | orchestrator |
2026-04-17 04:54:47.061743 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-04-17 04:54:47.061756 | orchestrator | Friday 17 April 2026 04:54:42 +0000 (0:00:03.727) 0:00:32.221 **********
2026-04-17 04:54:47.061769 | orchestrator | changed: [testbed-node-0]
2026-04-17 04:54:47.061779 | orchestrator |
2026-04-17 04:54:47.061790 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-04-17 04:54:47.061801 | orchestrator | Friday 17 April 2026 04:54:45 +0000 (0:00:03.292) 0:00:35.514 **********
2026-04-17 04:54:47.061833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-17 04:54:47.061850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-17 04:54:47.061871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-17 04:54:47.061888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:54:47.061900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:54:47.061920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:54:54.649464 | orchestrator |
2026-04-17 04:54:54.649610 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-04-17 04:54:54.649640 | orchestrator | Friday 17 April 2026 04:54:47 +0000 (0:00:01.618) 0:00:37.133 **********
2026-04-17 04:54:54.649661 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:54:54.649677 | orchestrator |
2026-04-17 04:54:54.649688 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-17 04:54:54.649700 | orchestrator | Friday 17 April 2026 04:54:47 +0000 (0:00:00.134) 0:00:37.267 **********
2026-04-17 04:54:54.649711 | orchestrator | skipping: [testbed-node-0]
2026-04-17 04:54:54.649722 | orchestrator | skipping: [testbed-node-1]
2026-04-17 04:54:54.649733 | orchestrator | skipping: [testbed-node-2]
2026-04-17 04:54:54.649744 | orchestrator |
2026-04-17 04:54:54.649755 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-04-17 04:54:54.649792 | orchestrator | Friday 17 April 2026 04:54:47 +0000 (0:00:00.309) 0:00:37.576 **********
2026-04-17 04:54:54.649803 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 04:54:54.649814 | orchestrator |
2026-04-17 04:54:54.649825 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-04-17 04:54:54.649836 | orchestrator | Friday 17 April 2026 04:54:48 +0000 (0:00:00.916) 0:00:38.493 **********
2026-04-17 04:54:54.649849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-17 04:54:54.649880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-17 04:54:54.649894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-17 04:54:54.649929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:54:54.649953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:54:54.649967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 04:54:54.649980 | orchestrator |
2026-04-17 04:54:54.649993 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-04-17 04:54:54.650011
| orchestrator | Friday 17 April 2026 04:54:50 +0000 (0:00:02.470) 0:00:40.964 ********** 2026-04-17 04:54:54.650107 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:54:54.650122 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:54:54.650135 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:54:54.650147 | orchestrator | 2026-04-17 04:54:54.650159 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-17 04:54:54.650171 | orchestrator | Friday 17 April 2026 04:54:51 +0000 (0:00:00.547) 0:00:41.512 ********** 2026-04-17 04:54:54.650185 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 04:54:54.650198 | orchestrator | 2026-04-17 04:54:54.650209 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-17 04:54:54.650220 | orchestrator | Friday 17 April 2026 04:54:52 +0000 (0:00:00.590) 0:00:42.103 ********** 2026-04-17 04:54:54.650231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:54:54.650252 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:54:55.631839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:54:55.631944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:54:55.631979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:54:55.631993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:54:55.632006 | orchestrator | 2026-04-17 04:54:55.632019 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-17 04:54:55.632032 | orchestrator | Friday 17 April 2026 04:54:54 +0000 (0:00:02.629) 0:00:44.732 ********** 2026-04-17 04:54:55.632061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 04:54:55.632097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:54:55.632110 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:54:55.632129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 04:54:55.632141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:54:55.632153 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:54:55.632165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 04:54:55.632192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:54:59.204459 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:54:59.204572 | orchestrator | 2026-04-17 
04:54:59.204589 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-17 04:54:59.204601 | orchestrator | Friday 17 April 2026 04:54:55 +0000 (0:00:00.976) 0:00:45.708 ********** 2026-04-17 04:54:59.204615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 04:54:59.204648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:54:59.204662 | 
orchestrator | skipping: [testbed-node-0] 2026-04-17 04:54:59.204674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 04:54:59.204707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:54:59.204719 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:54:59.204749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 04:54:59.204761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:54:59.204773 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:54:59.204784 | orchestrator | 2026-04-17 04:54:59.204795 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-17 04:54:59.204806 | orchestrator | Friday 17 April 2026 04:54:56 +0000 (0:00:00.945) 0:00:46.654 ********** 2026-04-17 04:54:59.204823 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:54:59.204836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:54:59.204862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:55:05.668723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:55:05.668831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:55:05.668847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:55:05.668858 | orchestrator | 2026-04-17 04:55:05.668886 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-17 04:55:05.668898 | orchestrator | Friday 17 April 2026 04:54:59 +0000 (0:00:02.629) 0:00:49.283 ********** 2026-04-17 04:55:05.668908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:55:05.668934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:55:05.668945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:55:05.668960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:55:05.668970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:55:05.668987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:55:05.668997 | orchestrator | 2026-04-17 04:55:05.669007 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-17 04:55:05.669017 | orchestrator | Friday 17 April 2026 04:55:04 +0000 (0:00:05.785) 0:00:55.068 ********** 2026-04-17 04:55:05.669034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 04:55:07.478609 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:55:07.478697 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:55:07.478730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 04:55:07.478764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:55:07.478776 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:55:07.478788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 04:55:07.478816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 04:55:07.478828 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:55:07.478840 | orchestrator | 2026-04-17 04:55:07.478852 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-17 04:55:07.478864 | orchestrator | Friday 17 April 2026 04:55:05 +0000 (0:00:00.688) 0:00:55.756 ********** 2026-04-17 04:55:07.478881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:55:07.478893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:55:07.478913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 04:55:07.478924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:55:07.478944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 04:56:00.028365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-04-17 04:56:00.028567 | orchestrator | 2026-04-17 04:56:00.028591 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-17 04:56:00.028604 | orchestrator | Friday 17 April 2026 04:55:07 +0000 (0:00:01.805) 0:00:57.562 ********** 2026-04-17 04:56:00.028616 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:56:00.028629 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:56:00.028639 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:56:00.028650 | orchestrator | 2026-04-17 04:56:00.028661 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-17 04:56:00.028672 | orchestrator | Friday 17 April 2026 04:55:08 +0000 (0:00:00.579) 0:00:58.141 ********** 2026-04-17 04:56:00.028683 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:56:00.028694 | orchestrator | 2026-04-17 04:56:00.028705 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-17 04:56:00.028715 | orchestrator | Friday 17 April 2026 04:55:10 +0000 (0:00:02.011) 0:01:00.152 ********** 2026-04-17 04:56:00.028726 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:56:00.028737 | orchestrator | 2026-04-17 04:56:00.028748 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-17 04:56:00.028759 | orchestrator | Friday 17 April 2026 04:55:12 +0000 (0:00:02.150) 0:01:02.303 ********** 2026-04-17 04:56:00.028770 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:56:00.028781 | orchestrator | 2026-04-17 04:56:00.028791 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-17 04:56:00.028802 | orchestrator | Friday 17 April 2026 04:55:28 +0000 (0:00:15.945) 0:01:18.249 ********** 2026-04-17 04:56:00.028813 | orchestrator | 2026-04-17 04:56:00.028824 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-04-17 04:56:00.028835 | orchestrator | Friday 17 April 2026 04:55:28 +0000 (0:00:00.073) 0:01:18.323 ********** 2026-04-17 04:56:00.028846 | orchestrator | 2026-04-17 04:56:00.028859 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-17 04:56:00.028872 | orchestrator | Friday 17 April 2026 04:55:28 +0000 (0:00:00.072) 0:01:18.395 ********** 2026-04-17 04:56:00.028885 | orchestrator | 2026-04-17 04:56:00.028898 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-17 04:56:00.028911 | orchestrator | Friday 17 April 2026 04:55:28 +0000 (0:00:00.076) 0:01:18.472 ********** 2026-04-17 04:56:00.028924 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:56:00.028937 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:56:00.028951 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:56:00.028964 | orchestrator | 2026-04-17 04:56:00.028976 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-17 04:56:00.028989 | orchestrator | Friday 17 April 2026 04:55:48 +0000 (0:00:19.706) 0:01:38.178 ********** 2026-04-17 04:56:00.029003 | orchestrator | changed: [testbed-node-0] 2026-04-17 04:56:00.029016 | orchestrator | changed: [testbed-node-1] 2026-04-17 04:56:00.029028 | orchestrator | changed: [testbed-node-2] 2026-04-17 04:56:00.029041 | orchestrator | 2026-04-17 04:56:00.029054 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 04:56:00.029068 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 04:56:00.029082 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 04:56:00.029095 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-17 04:56:00.029108 | orchestrator | 2026-04-17 04:56:00.029122 | orchestrator | 2026-04-17 04:56:00.029134 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 04:56:00.029147 | orchestrator | Friday 17 April 2026 04:55:59 +0000 (0:00:11.439) 0:01:49.618 ********** 2026-04-17 04:56:00.029168 | orchestrator | =============================================================================== 2026-04-17 04:56:00.029181 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.71s 2026-04-17 04:56:00.029194 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.95s 2026-04-17 04:56:00.029206 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.44s 2026-04-17 04:56:00.029216 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.21s 2026-04-17 04:56:00.029228 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.79s 2026-04-17 04:56:00.029238 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.79s 2026-04-17 04:56:00.029249 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.79s 2026-04-17 04:56:00.029279 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.73s 2026-04-17 04:56:00.029291 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.40s 2026-04-17 04:56:00.029302 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.30s 2026-04-17 04:56:00.029312 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.29s 2026-04-17 04:56:00.029323 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.21s 2026-04-17 04:56:00.029334 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.07s 2026-04-17 04:56:00.029344 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.63s 2026-04-17 04:56:00.029355 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.63s 2026-04-17 04:56:00.029366 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.47s 2026-04-17 04:56:00.029384 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.15s 2026-04-17 04:56:00.029395 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.01s 2026-04-17 04:56:00.029406 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.81s 2026-04-17 04:56:00.029484 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.62s 2026-04-17 04:56:00.810338 | orchestrator | ok: Runtime: 1:40:18.850618 2026-04-17 04:56:01.061183 | 2026-04-17 04:56:01.061325 | TASK [Deploy in a nutshell] 2026-04-17 04:56:01.596552 | orchestrator | skipping: Conditional result was False 2026-04-17 04:56:01.619885 | 2026-04-17 04:56:01.620095 | TASK [Bootstrap services] 2026-04-17 04:56:02.305774 | orchestrator | 2026-04-17 04:56:02.305949 | orchestrator | # BOOTSTRAP 2026-04-17 04:56:02.305969 | orchestrator | 2026-04-17 04:56:02.305982 | orchestrator | + set -e 2026-04-17 04:56:02.305993 | orchestrator | + echo 2026-04-17 04:56:02.306006 | orchestrator | + echo '# BOOTSTRAP' 2026-04-17 04:56:02.306057 | orchestrator | + echo 2026-04-17 04:56:02.306098 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-17 04:56:02.315482 | orchestrator | + set -e 2026-04-17 04:56:02.315558 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-17 04:56:04.472994 | orchestrator | 2026-04-17 04:56:04 | INFO  | It takes a 
moment until task 0cac1185-129c-49a8-8fa9-1d9762267ad5 (flavor-manager) has been started and output is visible here. 2026-04-17 04:56:11.911888 | orchestrator | 2026-04-17 04:56:07 | INFO  | Flavor SCS-1L-1 created 2026-04-17 04:56:11.912018 | orchestrator | 2026-04-17 04:56:07 | INFO  | Flavor SCS-1L-1-5 created 2026-04-17 04:56:11.912037 | orchestrator | 2026-04-17 04:56:08 | INFO  | Flavor SCS-1V-2 created 2026-04-17 04:56:11.912049 | orchestrator | 2026-04-17 04:56:08 | INFO  | Flavor SCS-1V-2-5 created 2026-04-17 04:56:11.912060 | orchestrator | 2026-04-17 04:56:08 | INFO  | Flavor SCS-1V-4 created 2026-04-17 04:56:11.912072 | orchestrator | 2026-04-17 04:56:08 | INFO  | Flavor SCS-1V-4-10 created 2026-04-17 04:56:11.912083 | orchestrator | 2026-04-17 04:56:08 | INFO  | Flavor SCS-1V-8 created 2026-04-17 04:56:11.912095 | orchestrator | 2026-04-17 04:56:08 | INFO  | Flavor SCS-1V-8-20 created 2026-04-17 04:56:11.912117 | orchestrator | 2026-04-17 04:56:08 | INFO  | Flavor SCS-2V-4 created 2026-04-17 04:56:11.912128 | orchestrator | 2026-04-17 04:56:09 | INFO  | Flavor SCS-2V-4-10 created 2026-04-17 04:56:11.912139 | orchestrator | 2026-04-17 04:56:09 | INFO  | Flavor SCS-2V-8 created 2026-04-17 04:56:11.912150 | orchestrator | 2026-04-17 04:56:09 | INFO  | Flavor SCS-2V-8-20 created 2026-04-17 04:56:11.912161 | orchestrator | 2026-04-17 04:56:09 | INFO  | Flavor SCS-2V-16 created 2026-04-17 04:56:11.912172 | orchestrator | 2026-04-17 04:56:09 | INFO  | Flavor SCS-2V-16-50 created 2026-04-17 04:56:11.912183 | orchestrator | 2026-04-17 04:56:09 | INFO  | Flavor SCS-4V-8 created 2026-04-17 04:56:11.912194 | orchestrator | 2026-04-17 04:56:09 | INFO  | Flavor SCS-4V-8-20 created 2026-04-17 04:56:11.912204 | orchestrator | 2026-04-17 04:56:10 | INFO  | Flavor SCS-4V-16 created 2026-04-17 04:56:11.912215 | orchestrator | 2026-04-17 04:56:10 | INFO  | Flavor SCS-4V-16-50 created 2026-04-17 04:56:11.912226 | orchestrator | 2026-04-17 04:56:10 | INFO  | Flavor 
SCS-4V-32 created 2026-04-17 04:56:11.912236 | orchestrator | 2026-04-17 04:56:10 | INFO  | Flavor SCS-4V-32-100 created 2026-04-17 04:56:11.912247 | orchestrator | 2026-04-17 04:56:10 | INFO  | Flavor SCS-8V-16 created 2026-04-17 04:56:11.912258 | orchestrator | 2026-04-17 04:56:10 | INFO  | Flavor SCS-8V-16-50 created 2026-04-17 04:56:11.912269 | orchestrator | 2026-04-17 04:56:10 | INFO  | Flavor SCS-8V-32 created 2026-04-17 04:56:11.912280 | orchestrator | 2026-04-17 04:56:10 | INFO  | Flavor SCS-8V-32-100 created 2026-04-17 04:56:11.912291 | orchestrator | 2026-04-17 04:56:11 | INFO  | Flavor SCS-16V-32 created 2026-04-17 04:56:11.912302 | orchestrator | 2026-04-17 04:56:11 | INFO  | Flavor SCS-16V-32-100 created 2026-04-17 04:56:11.912313 | orchestrator | 2026-04-17 04:56:11 | INFO  | Flavor SCS-2V-4-20s created 2026-04-17 04:56:11.912323 | orchestrator | 2026-04-17 04:56:11 | INFO  | Flavor SCS-4V-8-50s created 2026-04-17 04:56:11.912334 | orchestrator | 2026-04-17 04:56:11 | INFO  | Flavor SCS-8V-32-100s created 2026-04-17 04:56:14.378683 | orchestrator | 2026-04-17 04:56:14 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-17 04:56:24.478265 | orchestrator | 2026-04-17 04:56:24 | INFO  | Task 019f36a1-0dbd-4f32-af3b-539dbfcf40db (bootstrap-basic) was prepared for execution. 2026-04-17 04:56:24.478392 | orchestrator | 2026-04-17 04:56:24 | INFO  | It takes a moment until task 019f36a1-0dbd-4f32-af3b-539dbfcf40db (bootstrap-basic) has been started and output is visible here. 
2026-04-17 04:57:09.264253 | orchestrator | 2026-04-17 04:57:09.264344 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-17 04:57:09.264353 | orchestrator | 2026-04-17 04:57:09.264359 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-17 04:57:09.264365 | orchestrator | Friday 17 April 2026 04:56:29 +0000 (0:00:00.080) 0:00:00.080 ********** 2026-04-17 04:57:09.264371 | orchestrator | ok: [localhost] 2026-04-17 04:57:09.264378 | orchestrator | 2026-04-17 04:57:09.264383 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-17 04:57:09.264389 | orchestrator | Friday 17 April 2026 04:56:31 +0000 (0:00:02.041) 0:00:02.121 ********** 2026-04-17 04:57:09.264394 | orchestrator | ok: [localhost] 2026-04-17 04:57:09.264400 | orchestrator | 2026-04-17 04:57:09.264405 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-17 04:57:09.264411 | orchestrator | Friday 17 April 2026 04:56:38 +0000 (0:00:07.592) 0:00:09.714 ********** 2026-04-17 04:57:09.264417 | orchestrator | changed: [localhost] 2026-04-17 04:57:09.264422 | orchestrator | 2026-04-17 04:57:09.264428 | orchestrator | TASK [Create public network] *************************************************** 2026-04-17 04:57:09.264434 | orchestrator | Friday 17 April 2026 04:56:45 +0000 (0:00:06.694) 0:00:16.408 ********** 2026-04-17 04:57:09.264439 | orchestrator | changed: [localhost] 2026-04-17 04:57:09.264445 | orchestrator | 2026-04-17 04:57:09.264450 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-17 04:57:09.264456 | orchestrator | Friday 17 April 2026 04:56:50 +0000 (0:00:05.215) 0:00:21.624 ********** 2026-04-17 04:57:09.264465 | orchestrator | changed: [localhost] 2026-04-17 04:57:09.264471 | orchestrator | 2026-04-17 04:57:09.264477 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-04-17 04:57:09.264482 | orchestrator | Friday 17 April 2026 04:56:57 +0000 (0:00:06.556) 0:00:28.180 ********** 2026-04-17 04:57:09.264488 | orchestrator | changed: [localhost] 2026-04-17 04:57:09.264493 | orchestrator | 2026-04-17 04:57:09.264499 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-17 04:57:09.264504 | orchestrator | Friday 17 April 2026 04:57:01 +0000 (0:00:04.344) 0:00:32.525 ********** 2026-04-17 04:57:09.264509 | orchestrator | changed: [localhost] 2026-04-17 04:57:09.264515 | orchestrator | 2026-04-17 04:57:09.264521 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-17 04:57:09.264534 | orchestrator | Friday 17 April 2026 04:57:05 +0000 (0:00:03.798) 0:00:36.323 ********** 2026-04-17 04:57:09.264539 | orchestrator | ok: [localhost] 2026-04-17 04:57:09.264545 | orchestrator | 2026-04-17 04:57:09.264550 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 04:57:09.264576 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 04:57:09.264582 | orchestrator | 2026-04-17 04:57:09.264588 | orchestrator | 2026-04-17 04:57:09.264593 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 04:57:09.264599 | orchestrator | Friday 17 April 2026 04:57:08 +0000 (0:00:03.611) 0:00:39.935 ********** 2026-04-17 04:57:09.264604 | orchestrator | =============================================================================== 2026-04-17 04:57:09.264609 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.59s 2026-04-17 04:57:09.264615 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.69s 2026-04-17 04:57:09.264620 | 
orchestrator | Set public network to default ------------------------------------------- 6.56s 2026-04-17 04:57:09.264626 | orchestrator | Create public network --------------------------------------------------- 5.22s 2026-04-17 04:57:09.264656 | orchestrator | Create public subnet ---------------------------------------------------- 4.34s 2026-04-17 04:57:09.264662 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.80s 2026-04-17 04:57:09.264667 | orchestrator | Create manager role ----------------------------------------------------- 3.61s 2026-04-17 04:57:09.264680 | orchestrator | Gathering Facts --------------------------------------------------------- 2.04s 2026-04-17 04:57:11.860962 | orchestrator | 2026-04-17 04:57:11 | INFO  | It takes a moment until task 53d891b6-43c4-4a1b-9d91-e1f494c25b2f (image-manager) has been started and output is visible here. 2026-04-17 04:57:54.341835 | orchestrator | 2026-04-17 04:57:14 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-17 04:57:54.341955 | orchestrator | 2026-04-17 04:57:14 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-17 04:57:54.341972 | orchestrator | 2026-04-17 04:57:14 | INFO  | Importing image Cirros 0.6.2 2026-04-17 04:57:54.341984 | orchestrator | 2026-04-17 04:57:14 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-17 04:57:54.341996 | orchestrator | 2026-04-17 04:57:16 | INFO  | Waiting for image to leave queued state... 2026-04-17 04:57:54.342008 | orchestrator | 2026-04-17 04:57:18 | INFO  | Waiting for import to complete... 
2026-04-17 04:57:54.342070 | orchestrator | 2026-04-17 04:57:29 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-04-17 04:57:54.342084 | orchestrator | 2026-04-17 04:57:29 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-04-17 04:57:54.342095 | orchestrator | 2026-04-17 04:57:29 | INFO  | Setting internal_version = 0.6.2 2026-04-17 04:57:54.342106 | orchestrator | 2026-04-17 04:57:29 | INFO  | Setting image_original_user = cirros 2026-04-17 04:57:54.342118 | orchestrator | 2026-04-17 04:57:29 | INFO  | Adding tag os:cirros 2026-04-17 04:57:54.342128 | orchestrator | 2026-04-17 04:57:29 | INFO  | Setting property architecture: x86_64 2026-04-17 04:57:54.342139 | orchestrator | 2026-04-17 04:57:30 | INFO  | Setting property hw_disk_bus: scsi 2026-04-17 04:57:54.342150 | orchestrator | 2026-04-17 04:57:30 | INFO  | Setting property hw_rng_model: virtio 2026-04-17 04:57:54.342161 | orchestrator | 2026-04-17 04:57:30 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-17 04:57:54.342172 | orchestrator | 2026-04-17 04:57:31 | INFO  | Setting property hw_watchdog_action: reset 2026-04-17 04:57:54.342183 | orchestrator | 2026-04-17 04:57:31 | INFO  | Setting property hypervisor_type: qemu 2026-04-17 04:57:54.342194 | orchestrator | 2026-04-17 04:57:31 | INFO  | Setting property os_distro: cirros 2026-04-17 04:57:54.342205 | orchestrator | 2026-04-17 04:57:31 | INFO  | Setting property os_purpose: minimal 2026-04-17 04:57:54.342215 | orchestrator | 2026-04-17 04:57:31 | INFO  | Setting property replace_frequency: never 2026-04-17 04:57:54.342226 | orchestrator | 2026-04-17 04:57:32 | INFO  | Setting property uuid_validity: none 2026-04-17 04:57:54.342237 | orchestrator | 2026-04-17 04:57:32 | INFO  | Setting property provided_until: none 2026-04-17 04:57:54.342248 | orchestrator | 2026-04-17 04:57:32 | INFO  | Setting property image_description: Cirros 2026-04-17 04:57:54.342258 | orchestrator | 2026-04-17 04:57:32 | INFO  | 
Setting property image_name: Cirros 2026-04-17 04:57:54.342269 | orchestrator | 2026-04-17 04:57:33 | INFO  | Setting property internal_version: 0.6.2 2026-04-17 04:57:54.342280 | orchestrator | 2026-04-17 04:57:33 | INFO  | Setting property image_original_user: cirros 2026-04-17 04:57:54.342314 | orchestrator | 2026-04-17 04:57:33 | INFO  | Setting property os_version: 0.6.2 2026-04-17 04:57:54.342336 | orchestrator | 2026-04-17 04:57:34 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-17 04:57:54.342350 | orchestrator | 2026-04-17 04:57:34 | INFO  | Setting property image_build_date: 2023-05-30 2026-04-17 04:57:54.342362 | orchestrator | 2026-04-17 04:57:34 | INFO  | Checking status of 'Cirros 0.6.2' 2026-04-17 04:57:54.342375 | orchestrator | 2026-04-17 04:57:34 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-04-17 04:57:54.342387 | orchestrator | 2026-04-17 04:57:34 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-04-17 04:57:54.342418 | orchestrator | 2026-04-17 04:57:34 | INFO  | Processing image 'Cirros 0.6.3' 2026-04-17 04:57:54.342446 | orchestrator | 2026-04-17 04:57:34 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-04-17 04:57:54.342460 | orchestrator | 2026-04-17 04:57:34 | INFO  | Importing image Cirros 0.6.3 2026-04-17 04:57:54.342471 | orchestrator | 2026-04-17 04:57:34 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-17 04:57:54.342484 | orchestrator | 2026-04-17 04:57:36 | INFO  | Waiting for image to leave queued state... 2026-04-17 04:57:54.342496 | orchestrator | 2026-04-17 04:57:38 | INFO  | Waiting for import to complete... 
2026-04-17 04:57:54.342531 | orchestrator | 2026-04-17 04:57:48 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-17 04:57:54.342552 | orchestrator | 2026-04-17 04:57:48 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-17 04:57:54.342573 | orchestrator | 2026-04-17 04:57:48 | INFO  | Setting internal_version = 0.6.3
2026-04-17 04:57:54.342593 | orchestrator | 2026-04-17 04:57:48 | INFO  | Setting image_original_user = cirros
2026-04-17 04:57:54.342614 | orchestrator | 2026-04-17 04:57:48 | INFO  | Adding tag os:cirros
2026-04-17 04:57:54.342634 | orchestrator | 2026-04-17 04:57:49 | INFO  | Setting property architecture: x86_64
2026-04-17 04:57:54.342682 | orchestrator | 2026-04-17 04:57:49 | INFO  | Setting property hw_disk_bus: scsi
2026-04-17 04:57:54.342701 | orchestrator | 2026-04-17 04:57:49 | INFO  | Setting property hw_rng_model: virtio
2026-04-17 04:57:54.342719 | orchestrator | 2026-04-17 04:57:49 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-17 04:57:54.342738 | orchestrator | 2026-04-17 04:57:49 | INFO  | Setting property hw_watchdog_action: reset
2026-04-17 04:57:54.342757 | orchestrator | 2026-04-17 04:57:50 | INFO  | Setting property hypervisor_type: qemu
2026-04-17 04:57:54.342776 | orchestrator | 2026-04-17 04:57:50 | INFO  | Setting property os_distro: cirros
2026-04-17 04:57:54.342794 | orchestrator | 2026-04-17 04:57:50 | INFO  | Setting property os_purpose: minimal
2026-04-17 04:57:54.342810 | orchestrator | 2026-04-17 04:57:50 | INFO  | Setting property replace_frequency: never
2026-04-17 04:57:54.342821 | orchestrator | 2026-04-17 04:57:51 | INFO  | Setting property uuid_validity: none
2026-04-17 04:57:54.342832 | orchestrator | 2026-04-17 04:57:51 | INFO  | Setting property provided_until: none
2026-04-17 04:57:54.342843 | orchestrator | 2026-04-17 04:57:51 | INFO  | Setting property image_description: Cirros
2026-04-17 04:57:54.342853 | orchestrator | 2026-04-17 04:57:51 | INFO  | Setting property image_name: Cirros
2026-04-17 04:57:54.342863 | orchestrator | 2026-04-17 04:57:51 | INFO  | Setting property internal_version: 0.6.3
2026-04-17 04:57:54.342885 | orchestrator | 2026-04-17 04:57:52 | INFO  | Setting property image_original_user: cirros
2026-04-17 04:57:54.342895 | orchestrator | 2026-04-17 04:57:52 | INFO  | Setting property os_version: 0.6.3
2026-04-17 04:57:54.342906 | orchestrator | 2026-04-17 04:57:52 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-17 04:57:54.342917 | orchestrator | 2026-04-17 04:57:52 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-17 04:57:54.342927 | orchestrator | 2026-04-17 04:57:53 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-17 04:57:54.342937 | orchestrator | 2026-04-17 04:57:53 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-17 04:57:54.342948 | orchestrator | 2026-04-17 04:57:53 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-17 04:57:54.720308 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh
2026-04-17 04:57:57.237367 | orchestrator | 2026-04-17 04:57:57 | INFO  | date: 2026-04-17
2026-04-17 04:57:57.237470 | orchestrator | 2026-04-17 04:57:57 | INFO  | image: octavia-amphora-haproxy-2024.2.20260417.qcow2
2026-04-17 04:57:57.237509 | orchestrator | 2026-04-17 04:57:57 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260417.qcow2
2026-04-17 04:57:57.237524 | orchestrator | 2026-04-17 04:57:57 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260417.qcow2.CHECKSUM
2026-04-17 04:57:57.425948 | orchestrator | 2026-04-17 04:57:57 | INFO  | checksum: df07420c7eb0f37be40271844ab1563a5b0580a536e9742deb5e3959aae2c061
2026-04-17 04:57:57.498469 | orchestrator | 2026-04-17 04:57:57 | INFO  | It takes a moment until task 3118de9b-0d3e-48c4-b935-74f5f0a3c603 (image-manager) has been started and output is visible here.
2026-04-17 04:58:59.386826 | orchestrator | 2026-04-17 04:57:59 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-17'
2026-04-17 04:58:59.386947 | orchestrator | 2026-04-17 04:58:00 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260417.qcow2: 200
2026-04-17 04:58:59.386966 | orchestrator | 2026-04-17 04:58:00 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-17
2026-04-17 04:58:59.386979 | orchestrator | 2026-04-17 04:58:00 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260417.qcow2
2026-04-17 04:58:59.386993 | orchestrator | 2026-04-17 04:58:01 | INFO  | Waiting for image to leave queued state...
2026-04-17 04:58:59.387005 | orchestrator | 2026-04-17 04:58:03 | INFO  | Waiting for import to complete...
2026-04-17 04:58:59.387018 | orchestrator | 2026-04-17 04:58:13 | INFO  | Waiting for import to complete...
2026-04-17 04:58:59.387030 | orchestrator | 2026-04-17 04:58:23 | INFO  | Waiting for import to complete...
2026-04-17 04:58:59.387042 | orchestrator | 2026-04-17 04:58:33 | INFO  | Waiting for import to complete...
2026-04-17 04:58:59.387057 | orchestrator | 2026-04-17 04:58:43 | INFO  | Waiting for import to complete...
2026-04-17 04:58:59.387070 | orchestrator | 2026-04-17 04:58:53 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-17' successfully completed, reloading images
2026-04-17 04:58:59.387084 | orchestrator | 2026-04-17 04:58:54 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-17'
2026-04-17 04:58:59.387096 | orchestrator | 2026-04-17 04:58:54 | INFO  | Setting internal_version = 2026-04-17
2026-04-17 04:58:59.387134 | orchestrator | 2026-04-17 04:58:54 | INFO  | Setting image_original_user = ubuntu
2026-04-17 04:58:59.387148 | orchestrator | 2026-04-17 04:58:54 | INFO  | Adding tag amphora
2026-04-17 04:58:59.387160 | orchestrator | 2026-04-17 04:58:54 | INFO  | Adding tag os:ubuntu
2026-04-17 04:58:59.387172 | orchestrator | 2026-04-17 04:58:54 | INFO  | Setting property architecture: x86_64
2026-04-17 04:58:59.387183 | orchestrator | 2026-04-17 04:58:55 | INFO  | Setting property hw_disk_bus: scsi
2026-04-17 04:58:59.387196 | orchestrator | 2026-04-17 04:58:55 | INFO  | Setting property hw_rng_model: virtio
2026-04-17 04:58:59.387209 | orchestrator | 2026-04-17 04:58:55 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-17 04:58:59.387220 | orchestrator | 2026-04-17 04:58:55 | INFO  | Setting property hw_watchdog_action: reset
2026-04-17 04:58:59.387232 | orchestrator | 2026-04-17 04:58:56 | INFO  | Setting property hypervisor_type: qemu
2026-04-17 04:58:59.387245 | orchestrator | 2026-04-17 04:58:56 | INFO  | Setting property os_distro: ubuntu
2026-04-17 04:58:59.387257 | orchestrator | 2026-04-17 04:58:56 | INFO  | Setting property replace_frequency: quarterly
2026-04-17 04:58:59.387270 | orchestrator | 2026-04-17 04:58:56 | INFO  | Setting property uuid_validity: last-1
2026-04-17 04:58:59.387282 | orchestrator | 2026-04-17 04:58:56 | INFO  | Setting property provided_until: none
2026-04-17 04:58:59.387294 | orchestrator | 2026-04-17 04:58:57 | INFO  | Setting property os_purpose: network
2026-04-17 04:58:59.387306 | orchestrator | 2026-04-17 04:58:57 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-17 04:58:59.387334 | orchestrator | 2026-04-17 04:58:57 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-17 04:58:59.387347 | orchestrator | 2026-04-17 04:58:57 | INFO  | Setting property internal_version: 2026-04-17
2026-04-17 04:58:59.387360 | orchestrator | 2026-04-17 04:58:58 | INFO  | Setting property image_original_user: ubuntu
2026-04-17 04:58:59.387371 | orchestrator | 2026-04-17 04:58:58 | INFO  | Setting property os_version: 2026-04-17
2026-04-17 04:58:59.387384 | orchestrator | 2026-04-17 04:58:58 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260417.qcow2
2026-04-17 04:58:59.387397 | orchestrator | 2026-04-17 04:58:58 | INFO  | Setting property image_build_date: 2026-04-17
2026-04-17 04:58:59.387410 | orchestrator | 2026-04-17 04:58:58 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-17'
2026-04-17 04:58:59.387421 | orchestrator | 2026-04-17 04:58:58 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-17'
2026-04-17 04:58:59.387433 | orchestrator | 2026-04-17 04:58:59 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-17 04:58:59.387465 | orchestrator | 2026-04-17 04:58:59 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-17 04:58:59.387479 | orchestrator | 2026-04-17 04:58:59 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-17 04:58:59.387491 | orchestrator | 2026-04-17 04:58:59 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-17 04:58:59.853067 | orchestrator | ok: Runtime: 0:02:57.850219
2026-04-17 04:58:59.867023 |
2026-04-17 04:58:59.867154 | TASK [Run checks]
2026-04-17 04:59:00.606860 | orchestrator | + set -e
2026-04-17 04:59:00.607002 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-17 04:59:00.607017 | orchestrator | ++ export INTERACTIVE=false
2026-04-17 04:59:00.607029 | orchestrator | ++ INTERACTIVE=false
2026-04-17 04:59:00.607037 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-17 04:59:00.607044 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-17 04:59:00.607052 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-17 04:59:00.607673 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-17 04:59:00.612064 | orchestrator |
2026-04-17 04:59:00.612099 | orchestrator | # CHECK
2026-04-17 04:59:00.612106 | orchestrator |
2026-04-17 04:59:00.612113 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-17 04:59:00.612122 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-17 04:59:00.612129 | orchestrator | + echo
2026-04-17 04:59:00.612136 | orchestrator | + echo '# CHECK'
2026-04-17 04:59:00.612142 | orchestrator | + echo
2026-04-17 04:59:00.612152 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-17 04:59:00.612974 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-17 04:59:00.670826 | orchestrator |
2026-04-17 04:59:00.670927 | orchestrator | ## Containers @ testbed-manager
2026-04-17 04:59:00.670941 | orchestrator |
2026-04-17 04:59:00.670966 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-17 04:59:00.670978 | orchestrator | + echo
2026-04-17 04:59:00.670990 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-17 04:59:00.671002 | orchestrator | + echo
2026-04-17 04:59:00.671014 | orchestrator | + osism container testbed-manager ps
2026-04-17 04:59:02.736606 | orchestrator | 2026-04-17 04:59:02 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-17 04:59:03.146566 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-17 04:59:03.146688 | orchestrator | d7308968e8cd registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-04-17 04:59:03.146714 | orchestrator | 949bd3a429b5 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-04-17 04:59:03.146727 | orchestrator | a4f332fe3286 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-17 04:59:03.146739 | orchestrator | 722a1b051d8d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-17 04:59:03.146750 | orchestrator | dc91868bc120 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-04-17 04:59:03.146767 | orchestrator | cf90df4b9e0c registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 57 minutes ago Up 56 minutes cephclient
2026-04-17 04:59:03.146803 | orchestrator | 891cd0b533fd registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-17 04:59:03.146815 | orchestrator | b82d2bedc570 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-17 04:59:03.146851 | orchestrator | b49270d0a835 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-17 04:59:03.146865 | orchestrator | 564a746d53f3 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-04-17 04:59:03.146876 | orchestrator | ba8bc04417fc phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-04-17 04:59:03.146887 | orchestrator | 15be83bd3eec registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-04-17 04:59:03.146899 | orchestrator | f3ccb8ec2176 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-04-17 04:59:03.146910 | orchestrator | 80774d88d876 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-17 04:59:03.146939 | orchestrator | 9c7eda5c1a79 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-04-17 04:59:03.146961 | orchestrator | 163542557f20 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-04-17 04:59:03.146973 | orchestrator | 9e5c01556fdb registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-04-17 04:59:03.146984 | orchestrator | 722f268496b8 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-04-17 04:59:03.146995 | orchestrator | 7916bcd45473 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-04-17 04:59:03.147006 | orchestrator | 79af0cab1be0 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-04-17 04:59:03.147031 | orchestrator | 959d19334908 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-17 04:59:03.147042 | orchestrator | 9d3d8005a3ab registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-04-17 04:59:03.147064 | orchestrator | 70c3a71a5105 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-04-17 04:59:03.147084 | orchestrator | 03bf94ded36d registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-17 04:59:03.147095 | orchestrator | e7f7d2f5506c registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-04-17 04:59:03.147106 | orchestrator | afb2a1e0ac95 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-04-17 04:59:03.147117 | orchestrator | 620fdb444a19 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-04-17 04:59:03.147128 | orchestrator | c4f46cc4d835 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-04-17 04:59:03.147138 | orchestrator | 896818b8174c registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-04-17 04:59:03.147155 | orchestrator | 86728f39d63f registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-17 04:59:03.566142 | orchestrator |
2026-04-17 04:59:03.566247 | orchestrator | ## Images @ testbed-manager
2026-04-17 04:59:03.566262 | orchestrator |
2026-04-17 04:59:03.566275 | orchestrator | + echo
2026-04-17 04:59:03.566287 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-17 04:59:03.566299 | orchestrator | + echo
2026-04-17 04:59:03.566315 | orchestrator | + osism container testbed-manager images
2026-04-17 04:59:06.065517 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-17 04:59:06.065658 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 9e238fdcbaa6 25 hours ago 238MB
2026-04-17 04:59:06.065676 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-17 04:59:06.065687 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-17 04:59:06.065699 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB
2026-04-17 04:59:06.065710 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-17 04:59:06.065721 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-17 04:59:06.065732 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-17 04:59:06.065746 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB
2026-04-17 04:59:06.065757 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-17 04:59:06.065862 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB
2026-04-17 04:59:06.065877 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB
2026-04-17 04:59:06.065887 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-17 04:59:06.065899 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB
2026-04-17 04:59:06.065909 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB
2026-04-17 04:59:06.065920 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB
2026-04-17 04:59:06.065931 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB
2026-04-17 04:59:06.065942 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB
2026-04-17 04:59:06.065953 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB
2026-04-17 04:59:06.065964 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 5 months ago 334MB
2026-04-17 04:59:06.065994 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB
2026-04-17 04:59:06.066005 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-17 04:59:06.066052 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-17 04:59:06.066066 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB
2026-04-17 04:59:06.066077 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-17 04:59:06.066088 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-04-17 04:59:06.465844 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-17 04:59:06.465960 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-17 04:59:06.529149 | orchestrator |
2026-04-17 04:59:06.529244 | orchestrator | ## Containers @ testbed-node-0
2026-04-17 04:59:06.529259 | orchestrator |
2026-04-17 04:59:06.529270 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-17 04:59:06.529281 | orchestrator | + echo
2026-04-17 04:59:06.529293 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-17 04:59:06.529305 | orchestrator | + echo
2026-04-17 04:59:06.529317 | orchestrator | + osism container testbed-node-0 ps
2026-04-17 04:59:09.064276 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-17 04:59:09.064352 | orchestrator | 06b920fca939 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-04-17 04:59:09.064372 | orchestrator | abcef633b17c registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-04-17 04:59:09.064377 | orchestrator | 353776ee8447 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-04-17 04:59:09.064381 | orchestrator | fdb368db3dce registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-04-17 04:59:09.064400 | orchestrator | de747570aa89 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-17 04:59:09.064404 | orchestrator | 91e647542d26 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter
2026-04-17 04:59:09.064434 | orchestrator | e6c63d27554a registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-04-17 04:59:09.064438 | orchestrator | af7c7b8097eb registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-17 04:59:09.064442 | orchestrator | 54d10f7846bb registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-04-17 04:59:09.064446 | orchestrator | 4cf5110cf87b registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-04-17 04:59:09.064450 | orchestrator | fee1c4beb993 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-04-17 04:59:09.064454 | orchestrator | 3cff531bdb55 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-04-17 04:59:09.064458 | orchestrator | d8e7c03c3b03 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier
2026-04-17 04:59:09.064461 | orchestrator | 17d8cc3b8ab4 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener
2026-04-17 04:59:09.064465 | orchestrator | 549a58ca1d42 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator
2026-04-17 04:59:09.064469 | orchestrator | bc14354599be registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-04-17 04:59:09.064472 | orchestrator | 3f4662523db0 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central
2026-04-17 04:59:09.064476 | orchestrator | 390179434ca0 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification
2026-04-17 04:59:09.064480 | orchestrator | 248afd013f19 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-04-17 04:59:09.064499 | orchestrator | 8555abb2d970 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping
2026-04-17 04:59:09.064503 | orchestrator | 0bd2b5119524 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager
2026-04-17 04:59:09.064507 | orchestrator | 5e9300055bce registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-04-17 04:59:09.064514 | orchestrator | c07df6f6d1cc registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 21 minutes (healthy) octavia_api
2026-04-17 04:59:09.064517 | orchestrator | 1ef2f0744eb6 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-04-17 04:59:09.064521 | orchestrator | 0fabba90e9b2 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-04-17 04:59:09.064528 | orchestrator | 90f1da9dc38e registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-04-17 04:59:09.064532 | orchestrator | 5d97fc121e77 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-04-17 04:59:09.064536 | orchestrator | 7726e3ef563e registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-04-17 04:59:09.064540 | orchestrator | dc32569304e6 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-04-17 04:59:09.064543 | orchestrator | 3f7259bf1547 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker
2026-04-17 04:59:09.064547 | orchestrator | 37e5b0aa9c00 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-04-17 04:59:09.064551 | orchestrator | b578e59ae1ed registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api
2026-04-17 04:59:09.064555 | orchestrator | f45e0c154bc0 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-04-17 04:59:09.064558 | orchestrator | 89878a645412 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume
2026-04-17 04:59:09.064562 | orchestrator | 0a601f1ee34c registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler
2026-04-17 04:59:09.064566 | orchestrator | 8f6fbd730275 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api
2026-04-17 04:59:09.064569 | orchestrator | 5458de6cca65 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-04-17 04:59:09.064573 | orchestrator | 06408721fcc4 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console
2026-04-17 04:59:09.064577 | orchestrator | 680b63d9ef93 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_apiserver
2026-04-17 04:59:09.064584 | orchestrator | 1f36508b937d registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon
2026-04-17 04:59:09.064591 | orchestrator | 56652383b050 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy
2026-04-17 04:59:09.064596 | orchestrator | 909d65adc06c registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor
2026-04-17 04:59:09.064602 | orchestrator | d03d0f0c8891 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_api
2026-04-17 04:59:09.064606 | orchestrator | ee7ad9d5e119 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler
2026-04-17 04:59:09.064609 | orchestrator | 8f772f740719 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server
2026-04-17 04:59:09.064613 | orchestrator | 17fb2c2c6d28 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api
2026-04-17 04:59:09.064617 | orchestrator | c1e4151023de registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone
2026-04-17 04:59:09.064620 | orchestrator | 970c1ff86cdf registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet
2026-04-17 04:59:09.064624 | orchestrator | 1b905e75d3fe registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh
2026-04-17 04:59:09.064628 | orchestrator | dd4b83460efe registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 55 minutes ago Up 55 minutes ceph-mgr-testbed-node-0
2026-04-17 04:59:09.064631 | orchestrator | cfb27ef93a9a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-04-17 04:59:09.064635 | orchestrator | aa031f9a4b08 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-04-17 04:59:09.064639 | orchestrator | 53b4af59e3eb registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-17 04:59:09.064643 | orchestrator | 9b08e8dca9b8 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-17 04:59:09.064646 | orchestrator | 3db5e78144df registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-17 04:59:09.064650 | orchestrator | 6b6224c0a777 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-17 04:59:09.064656 | orchestrator | e4a1722c74f8 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-17 04:59:09.064660 | orchestrator | 789dd16e096a registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-17 04:59:09.064667 | orchestrator | a3294e21a229 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-17 04:59:09.064673 | orchestrator | a57561788a70 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-17 04:59:09.064677 | orchestrator | 1cd174859596 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-04-17 04:59:09.064681 | orchestrator | f1c6ac422f49 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-04-17 04:59:09.064685 | orchestrator | 258d5d7d1e67 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-04-17 04:59:09.064688 | orchestrator | 74216a98cae7 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-04-17 04:59:09.064692 | orchestrator | 41f27d10173f registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-04-17 04:59:09.064696 | orchestrator | b281f359a7f3 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-04-17 04:59:09.064699 | orchestrator | 427d626db490 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-04-17 04:59:09.064703 | orchestrator | 6a2e2b15bc5a registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-04-17 04:59:09.064707 | orchestrator | 4007766af0df registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-17 04:59:09.064711 | orchestrator | 34ea712b761c registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-17 04:59:09.064714 | orchestrator | 7716baecfb01 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-17 04:59:09.417509 | orchestrator |
2026-04-17 04:59:09.417616 | orchestrator | ## Images @ testbed-node-0
2026-04-17 04:59:09.417633 | orchestrator |
2026-04-17 04:59:09.417646 | orchestrator | + echo
2026-04-17 04:59:09.417658 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-17 04:59:09.417670 | orchestrator | + echo
2026-04-17 04:59:09.417681 | orchestrator | + osism container testbed-node-0 images
2026-04-17 04:59:12.019055 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-17 04:59:12.019191 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-17 04:59:12.019208 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-17 04:59:12.019220 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-17 04:59:12.019238 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-17 04:59:12.019272 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-17 04:59:12.019283 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-17 04:59:12.019294 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-17 04:59:12.019305 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-17 04:59:12.019315 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-17 04:59:12.019326 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-17 04:59:12.019336 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-17 04:59:12.019347 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-17 04:59:12.019357 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-17 04:59:12.019368 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-17 04:59:12.019378 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-17 04:59:12.019389 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-17 04:59:12.019399 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-17 04:59:12.019410 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-17 04:59:12.019421 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-17 04:59:12.019431 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-17 04:59:12.019442 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-17 04:59:12.019452 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-17 04:59:12.039826 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-17
04:59:12.039903 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-17 04:59:12.039916 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-17 04:59:12.039928 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-17 04:59:12.039939 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-17 04:59:12.039967 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-17 04:59:12.039979 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-17 04:59:12.039990 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-17 04:59:12.040018 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-17 04:59:12.040029 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-17 04:59:12.040040 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-17 04:59:12.040050 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-17 04:59:12.040061 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-17 04:59:12.040072 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-17 04:59:12.040082 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-17 04:59:12.040093 | 
orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-17 04:59:12.040104 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-17 04:59:12.040115 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-17 04:59:12.040126 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-17 04:59:12.040136 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-17 04:59:12.040147 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-17 04:59:12.040158 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-17 04:59:12.040169 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-17 04:59:12.040179 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-17 04:59:12.040190 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-17 04:59:12.040201 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-17 04:59:12.040212 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-17 04:59:12.040222 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-17 04:59:12.040233 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-17 04:59:12.040244 | 
orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-17 04:59:12.040254 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-17 04:59:12.040265 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-17 04:59:12.040292 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-17 04:59:12.040304 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-17 04:59:12.040321 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-17 04:59:12.040332 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-17 04:59:12.040349 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-17 04:59:12.040362 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-17 04:59:12.040374 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-17 04:59:12.040387 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-17 04:59:12.040399 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-17 04:59:12.040411 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-17 04:59:12.040423 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-17 04:59:12.040435 | 
orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-17 04:59:12.040447 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-17 04:59:12.040460 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-17 04:59:12.040473 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-17 04:59:12.400602 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-17 04:59:12.401021 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-17 04:59:12.450416 | orchestrator | 2026-04-17 04:59:12.450505 | orchestrator | ## Containers @ testbed-node-1 2026-04-17 04:59:12.450525 | orchestrator | 2026-04-17 04:59:12.450537 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-17 04:59:12.450549 | orchestrator | + echo 2026-04-17 04:59:12.450561 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-17 04:59:12.450573 | orchestrator | + echo 2026-04-17 04:59:12.450585 | orchestrator | + osism container testbed-node-1 ps 2026-04-17 04:59:14.899752 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-17 04:59:14.899901 | orchestrator | a13e20b79006 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-04-17 04:59:14.899920 | orchestrator | 8d15de459764 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-04-17 04:59:14.899932 | orchestrator | 880fb7128391 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-04-17 04:59:14.899944 | orchestrator | 8f6e87db6620 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 
"dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-04-17 04:59:14.899956 | orchestrator | ab78c11c88ab registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-04-17 04:59:14.899967 | orchestrator | 801b2773dab9 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-04-17 04:59:14.900007 | orchestrator | aae07ab744b7 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-04-17 04:59:14.900022 | orchestrator | cbefa5ad5573 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-04-17 04:59:14.900041 | orchestrator | b250e8731ff5 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-04-17 04:59:14.900059 | orchestrator | 73aaff617ce7 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-04-17 04:59:14.900078 | orchestrator | 66ce95bfd9bc registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-04-17 04:59:14.900436 | orchestrator | 7c7d7c4fae08 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-04-17 04:59:14.900575 | orchestrator | 85b1b5d87331 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-04-17 04:59:14.900595 | orchestrator | cba3a157444d 
registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-04-17 04:59:14.900607 | orchestrator | 821f9aaa6c7d registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-04-17 04:59:14.900618 | orchestrator | ca54e2ec87f7 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-04-17 04:59:14.900629 | orchestrator | 5a4e469b20ee registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-04-17 04:59:14.900641 | orchestrator | cb01ccc5d09f registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-04-17 04:59:14.900652 | orchestrator | 8fb2a7bc2102 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-04-17 04:59:14.900663 | orchestrator | 959ea3f17b1e registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-04-17 04:59:14.900674 | orchestrator | a49055a0c7a5 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-04-17 04:59:14.900685 | orchestrator | 2ae61b454c22 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-04-17 04:59:14.900696 | orchestrator | 8373cecc8da6 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-04-17 04:59:14.900729 | 
orchestrator | 6dae38761754 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-04-17 04:59:14.900741 | orchestrator | 8a0600b5ffa8 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-04-17 04:59:14.900752 | orchestrator | e41dcaecc805 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-04-17 04:59:14.900762 | orchestrator | c0791844f89f registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-04-17 04:59:14.900773 | orchestrator | 9070cf30ee36 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-04-17 04:59:14.900784 | orchestrator | 88fc645f3975 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-04-17 04:59:14.900829 | orchestrator | 556b098752e2 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-04-17 04:59:14.900841 | orchestrator | 61aef8002b59 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-04-17 04:59:14.900873 | orchestrator | c000b961e4d5 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api 2026-04-17 04:59:14.900885 | orchestrator | 7ff0b399f1cc registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes 
(healthy) cinder_backup 2026-04-17 04:59:14.900896 | orchestrator | eb79cc15903b registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-04-17 04:59:14.900906 | orchestrator | add92a7d56db registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-04-17 04:59:14.900917 | orchestrator | 5400bf38b784 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-04-17 04:59:14.900928 | orchestrator | edb3bba14525 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-04-17 04:59:14.900945 | orchestrator | 5e403145fb18 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console 2026-04-17 04:59:14.900956 | orchestrator | b9bbaacd3dc2 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_apiserver 2026-04-17 04:59:14.900967 | orchestrator | d563b87e3767 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-04-17 04:59:14.900980 | orchestrator | eca778c36295 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-04-17 04:59:14.900999 | orchestrator | 214f2c066dcd registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-04-17 04:59:14.901012 | orchestrator | 13eb58db1da0 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_api 2026-04-17 
04:59:14.901025 | orchestrator | dcd96bae8c97 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_scheduler 2026-04-17 04:59:14.901037 | orchestrator | cbf10f3db6fa registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 47 minutes ago Up 47 minutes (healthy) neutron_server 2026-04-17 04:59:14.901049 | orchestrator | 3c2ba620a601 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-04-17 04:59:14.901062 | orchestrator | 0d672f273550 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone 2026-04-17 04:59:14.901074 | orchestrator | a222c2e9d1d5 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet 2026-04-17 04:59:14.901086 | orchestrator | a2eabf12a6fc registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_ssh 2026-04-17 04:59:14.901098 | orchestrator | f4ed44f2e6ea registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 55 minutes ago Up 55 minutes ceph-mgr-testbed-node-1 2026-04-17 04:59:14.901111 | orchestrator | c8fe983822c3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-04-17 04:59:14.901125 | orchestrator | 9f8a3fd74f0b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-04-17 04:59:14.901151 | orchestrator | 780b11be3cc1 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-04-17 04:59:14.901163 | orchestrator | 66b24eaae085 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-04-17 04:59:14.901174 | orchestrator | ca2b54185529 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-04-17 04:59:14.901185 | orchestrator | f89474f7791a registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-04-17 04:59:14.901196 | orchestrator | d1bf0555cd94 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-04-17 04:59:14.901206 | orchestrator | c0751658a9cd registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-04-17 04:59:14.901217 | orchestrator | 8fad5231af7f registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-04-17 04:59:14.901234 | orchestrator | a3006fb2be5c registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-04-17 04:59:14.901245 | orchestrator | b715f68ba7de registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-04-17 04:59:14.901256 | orchestrator | caeaf79fa2bf registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-04-17 04:59:14.901267 | orchestrator | 8f9b716e39eb registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-04-17 04:59:14.901277 | orchestrator | 8f6e03a36176 
registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-04-17 04:59:14.901288 | orchestrator | 115bfcec8683 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-04-17 04:59:14.901304 | orchestrator | 7bd4f65676b0 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-04-17 04:59:14.901315 | orchestrator | 03ab0ef126b5 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-04-17 04:59:14.901326 | orchestrator | c97e462d6e2e registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-04-17 04:59:14.901342 | orchestrator | 50692f1feba7 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-17 04:59:14.901369 | orchestrator | b5a9858364dc registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-17 04:59:14.901390 | orchestrator | 4a3a547d3516 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-17 04:59:15.319107 | orchestrator | 2026-04-17 04:59:15.319225 | orchestrator | ## Images @ testbed-node-1 2026-04-17 04:59:15.319253 | orchestrator | 2026-04-17 04:59:15.319264 | orchestrator | + echo 2026-04-17 04:59:15.319276 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-17 04:59:15.319287 | orchestrator | + echo 2026-04-17 04:59:15.319298 | orchestrator | + osism container testbed-node-1 images 2026-04-17 04:59:17.859127 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-17 04:59:17.859234 | orchestrator | 
registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-17 04:59:17.859251 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-17 04:59:17.859264 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-17 04:59:17.859277 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-17 04:59:17.859289 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-17 04:59:17.859319 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-17 04:59:17.859330 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-17 04:59:17.859340 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-17 04:59:17.859351 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-17 04:59:17.859362 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-17 04:59:17.859372 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-17 04:59:17.859383 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-17 04:59:17.859393 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-17 04:59:17.859403 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-17 04:59:17.859414 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 
1.15GB 2026-04-17 04:59:17.859425 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-17 04:59:17.859435 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-17 04:59:17.859446 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-17 04:59:17.859457 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-17 04:59:17.859467 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-17 04:59:17.859478 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-17 04:59:17.859488 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-17 04:59:17.859498 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-17 04:59:17.859509 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-17 04:59:17.859519 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-17 04:59:17.859530 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-17 04:59:17.859540 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-17 04:59:17.859551 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-17 04:59:17.859562 | orchestrator | 
registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-17 04:59:17.859572 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-17 04:59:17.859583 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-17 04:59:17.859611 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-17 04:59:17.859629 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-17 04:59:17.859640 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-17 04:59:17.859650 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-17 04:59:17.859661 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-17 04:59:17.859671 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-17 04:59:17.859682 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-17 04:59:17.859692 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-17 04:59:17.859721 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-17 04:59:17.859733 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-17 04:59:17.859743 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-17 04:59:17.859753 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-17 04:59:17.859764 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-17 04:59:17.859774 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-17 04:59:17.859785 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-17 04:59:17.859795 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-17 04:59:17.859828 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-17 04:59:17.859838 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-17 04:59:17.859849 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-17 04:59:17.859859 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-17 04:59:17.859874 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-17 04:59:17.859892 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-17 04:59:17.859906 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-17 04:59:17.859917 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-17 04:59:17.859928 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-17 04:59:17.859938 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-17 04:59:17.859949 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-17 04:59:17.859967 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-17 04:59:17.859983 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-17 04:59:17.859994 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-17 04:59:17.860005 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-17 04:59:17.860015 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-17 04:59:17.860033 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-17 04:59:17.860044 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-17 04:59:17.860054 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-17 04:59:17.860065 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-17 04:59:17.860075 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-17 04:59:17.860086 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-17 04:59:18.238288 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-17 04:59:18.238446 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-17 04:59:18.301626 |
orchestrator |
2026-04-17 04:59:18.301714 | orchestrator | ## Containers @ testbed-node-2
2026-04-17 04:59:18.301728 | orchestrator |
2026-04-17 04:59:18.301742 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-17 04:59:18.301761 | orchestrator | + echo
2026-04-17 04:59:18.301780 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-04-17 04:59:18.301792 | orchestrator | + echo
2026-04-17 04:59:18.301830 | orchestrator | + osism container testbed-node-2 ps
2026-04-17 04:59:20.935695 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-17 04:59:20.935857 | orchestrator | f765e1d77711 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-04-17 04:59:20.935877 | orchestrator | b064d5905e93 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-04-17 04:59:20.935889 | orchestrator | b59f7c228a72 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-04-17 04:59:20.935901 | orchestrator | 102585a9aa8e registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-04-17 04:59:20.935913 | orchestrator | fbefd3a47dcc registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-17 04:59:20.935924 | orchestrator | fedaeb532874 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-04-17 04:59:20.935935 | orchestrator | 66d7171e8d7c registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-04-17 04:59:20.935970 | orchestrator | 7da7da85db4f registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-17 04:59:20.935982 | orchestrator | b2837a56ba75 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-04-17 04:59:20.935993 | orchestrator | 144c3845c5d8 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-04-17 04:59:20.936004 | orchestrator | 719d2eaec58f registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-04-17 04:59:20.936015 | orchestrator | 8569b4b56118 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-04-17 04:59:20.936026 | orchestrator | 08694ec15f47 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier
2026-04-17 04:59:20.936037 | orchestrator | 19a84ea9f9f8 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener
2026-04-17 04:59:20.936070 | orchestrator | 5aec3c44ede1 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-04-17 04:59:20.936089 | orchestrator | d2aa67b9dbf5 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-04-17 04:59:20.936107 | orchestrator | 95aa67413611 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central
2026-04-17 04:59:20.936124 | orchestrator | a7fa41c38ab9 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-04-17 04:59:20.936142 | orchestrator | b666c4cdd0b6 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-04-17 04:59:20.936181 | orchestrator | 63c9ede23443 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping
2026-04-17 04:59:20.936202 | orchestrator | a84d8f82caab registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager
2026-04-17 04:59:20.936220 | orchestrator | 61dafa26aee4 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-04-17 04:59:20.936238 | orchestrator | 21fb809b7294 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-04-17 04:59:20.936256 | orchestrator | 3bb9c855ea8f registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-04-17 04:59:20.936275 | orchestrator | ce193a93e2cc registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-04-17 04:59:20.936303 | orchestrator | 2b1ccaf651e5 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-04-17 04:59:20.936314 | orchestrator | 307215706c00 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy)
designate_central
2026-04-17 04:59:20.936325 | orchestrator | 1e0e96e99b09 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-04-17 04:59:20.936336 | orchestrator | 7557140cff3f registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-04-17 04:59:20.936346 | orchestrator | bc4d8e650738 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker
2026-04-17 04:59:20.936357 | orchestrator | 9abf6d0be536 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-04-17 04:59:20.936368 | orchestrator | 9d4e447d5d70 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-04-17 04:59:20.936385 | orchestrator | 17669a0dad20 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-04-17 04:59:20.936396 | orchestrator | 7bb1559fd00b registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume
2026-04-17 04:59:20.936407 | orchestrator | fb6ea2771e74 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-04-17 04:59:20.936417 | orchestrator | a343cc18a425 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api
2026-04-17 04:59:20.936428 | orchestrator | bcc4c1f75e5d registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-04-17 04:59:20.936439 | orchestrator | 338f5f9c3ae3 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console
2026-04-17 04:59:20.936449 | orchestrator | a85d403a09a5 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver
2026-04-17 04:59:20.936469 | orchestrator | 304ec30d5ff7 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon
2026-04-17 04:59:20.936480 | orchestrator | e85c10f40672 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy
2026-04-17 04:59:20.936491 | orchestrator | 061b3c7aa133 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor
2026-04-17 04:59:20.936508 | orchestrator | c77306b4462e registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_api
2026-04-17 04:59:20.936519 | orchestrator | fb89b7b6388c registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler
2026-04-17 04:59:20.936532 | orchestrator | d06ee8bbd6c2 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 47 minutes ago Up 47 minutes (healthy) neutron_server
2026-04-17 04:59:20.936550 | orchestrator | ffa52e171a71 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api
2026-04-17 04:59:20.936569 | orchestrator | 5fbf28f1d76a registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone
2026-04-17 04:59:20.936586 | orchestrator | 8651e57cf62d registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet
2026-04-17 04:59:20.936603 | orchestrator | f61144cfa3b3 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_ssh
2026-04-17 04:59:20.936621 | orchestrator | fc4b1bdd586d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 55 minutes ago Up 55 minutes ceph-mgr-testbed-node-2
2026-04-17 04:59:20.936639 | orchestrator | d635e3927a45 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-04-17 04:59:20.936657 | orchestrator | f2e2f728469b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2
2026-04-17 04:59:20.936676 | orchestrator | 57eb521f4c7d registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-17 04:59:20.936694 | orchestrator | 2cb335d46de3 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-17 04:59:20.936722 | orchestrator | 11bed07b28cb registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-17 04:59:20.936741 | orchestrator | d0365891259b registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-17 04:59:20.936760 | orchestrator | 950cbc4a6f5d registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-17 04:59:20.936777 | orchestrator | e19c85eacec1 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-17 04:59:20.936794 | orchestrator | e5deed18db73 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-17 04:59:20.936853 | orchestrator | dd22462d2328 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-17 04:59:20.936887 | orchestrator | 7ae42bd27324 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-04-17 04:59:20.936903 | orchestrator | 11aff18a1105 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-04-17 04:59:20.936923 | orchestrator | 6a6e6222c71d registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-04-17 04:59:20.936941 | orchestrator | 44826922579d registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-04-17 04:59:20.936956 | orchestrator | 9e5e1d22ea2d registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-04-17 04:59:20.936967 | orchestrator | 9c118ac1085f registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-04-17 04:59:20.936978 | orchestrator | 935141705be6 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-04-17 04:59:20.936988 | orchestrator | 38684f4cb8ed
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-04-17 04:59:20.936999 | orchestrator | 65fcc83f1d0e registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-17 04:59:20.937010 | orchestrator | 48fc028b6050 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-17 04:59:20.937021 | orchestrator | c395e647b350 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-17 04:59:21.351770 | orchestrator |
2026-04-17 04:59:21.351931 | orchestrator | ## Images @ testbed-node-2
2026-04-17 04:59:21.351951 | orchestrator |
2026-04-17 04:59:21.351967 | orchestrator | + echo
2026-04-17 04:59:21.351983 | orchestrator | + echo '## Images @ testbed-node-2'
2026-04-17 04:59:21.351999 | orchestrator | + echo
2026-04-17 04:59:21.352014 | orchestrator | + osism container testbed-node-2 images
2026-04-17 04:59:23.893404 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-17 04:59:23.893510 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-17 04:59:23.893539 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-17 04:59:23.893552 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-17 04:59:23.893563 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-17 04:59:23.893574 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-17 04:59:23.893585 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-17 04:59:23.893596 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-17 04:59:23.893627 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-17 04:59:23.893639 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-17 04:59:23.893649 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-17 04:59:23.893665 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-17 04:59:23.893676 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-17 04:59:23.893687 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-17 04:59:23.893698 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-17 04:59:23.893709 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-17 04:59:23.893719 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-17 04:59:23.893730 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-17 04:59:23.893741 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-17 04:59:23.893752 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-17 04:59:23.893763 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-17 04:59:23.893773 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-17 04:59:23.893784 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-17 04:59:23.893795 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-17 04:59:23.893806 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-17 04:59:23.893902 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-17 04:59:23.893914 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-17 04:59:23.893925 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-17 04:59:23.893936 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-17 04:59:23.893949 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-17 04:59:23.893962 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-17 04:59:23.893975 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-17 04:59:23.894005 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-17 04:59:23.894067 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-17 04:59:23.894083 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-17 04:59:23.894106 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-17 04:59:23.894119 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-17 04:59:23.894131 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-17 04:59:23.894144 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-17 04:59:23.894156 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-17 04:59:23.894167 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-17 04:59:23.894178 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-17 04:59:23.894189 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-17 04:59:23.894199 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-17 04:59:23.894219 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-17 04:59:23.894230 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-17 04:59:23.894241 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-17 04:59:23.894252 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-17 04:59:23.894263 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-17 04:59:23.894277 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-17 04:59:23.894296 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-17 04:59:23.894314 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-17 04:59:23.894332 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-17 04:59:23.894350 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-17 04:59:23.894369 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-17 04:59:23.894388 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-17 04:59:23.894403 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-17 04:59:23.894420 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-17 04:59:23.894440 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-17 04:59:23.894457 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-17 04:59:23.894475 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-17 04:59:23.894507 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-17 04:59:23.894520 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-17 04:59:23.894530
| orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-17 04:59:23.894551 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-17 04:59:23.894563 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-17 04:59:23.894573 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-17 04:59:23.894590 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-17 04:59:23.894601 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-17 04:59:23.894612 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-17 04:59:24.288421 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-17 04:59:24.296437 | orchestrator | + set -e 2026-04-17 04:59:24.296509 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 04:59:24.296535 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 04:59:24.296554 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 04:59:24.296572 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 04:59:24.297676 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 04:59:24.297709 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 04:59:24.297722 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 04:59:24.297740 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-17 04:59:24.297758 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-17 04:59:24.297783 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 04:59:24.297808 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 04:59:24.297898 | orchestrator | ++ export ARA=false 2026-04-17 04:59:24.297915 | orchestrator | ++ 
ARA=false 2026-04-17 04:59:24.297933 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 04:59:24.297952 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 04:59:24.297970 | orchestrator | ++ export TEMPEST=false 2026-04-17 04:59:24.297987 | orchestrator | ++ TEMPEST=false 2026-04-17 04:59:24.298004 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 04:59:24.298083 | orchestrator | ++ IS_ZUUL=true 2026-04-17 04:59:24.298100 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 04:59:24.298111 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 04:59:24.298122 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 04:59:24.298133 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 04:59:24.298143 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 04:59:24.298154 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 04:59:24.298166 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 04:59:24.298177 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 04:59:24.298188 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 04:59:24.298198 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 04:59:24.298209 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-17 04:59:24.298220 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-17 04:59:24.308577 | orchestrator | + set -e 2026-04-17 04:59:24.308658 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 04:59:24.308675 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 04:59:24.308688 | orchestrator | ++ INTERACTIVE=false 2026-04-17 04:59:24.308699 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 04:59:24.308710 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 04:59:24.308721 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-17 04:59:24.309713 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-04-17 04:59:24.316838 | orchestrator | 2026-04-17 04:59:24.316923 | orchestrator | # Ceph status 2026-04-17 04:59:24.316938 | orchestrator | 2026-04-17 04:59:24.316974 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-17 04:59:24.316987 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-17 04:59:24.316999 | orchestrator | + echo 2026-04-17 04:59:24.317010 | orchestrator | + echo '# Ceph status' 2026-04-17 04:59:24.317021 | orchestrator | + echo 2026-04-17 04:59:24.317032 | orchestrator | + ceph -s 2026-04-17 04:59:24.942887 | orchestrator | cluster: 2026-04-17 04:59:24.942988 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-17 04:59:24.943003 | orchestrator | health: HEALTH_OK 2026-04-17 04:59:24.943016 | orchestrator | 2026-04-17 04:59:24.943028 | orchestrator | services: 2026-04-17 04:59:24.943039 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 67m) 2026-04-17 04:59:24.943064 | orchestrator | mgr: testbed-node-2(active, since 55m), standbys: testbed-node-1, testbed-node-0 2026-04-17 04:59:24.943076 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-17 04:59:24.943087 | orchestrator | osd: 6 osds: 6 up (since 64m), 6 in (since 64m) 2026-04-17 04:59:24.943099 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-17 04:59:24.943110 | orchestrator | 2026-04-17 04:59:24.943120 | orchestrator | data: 2026-04-17 04:59:24.943131 | orchestrator | volumes: 1/1 healthy 2026-04-17 04:59:24.943143 | orchestrator | pools: 14 pools, 401 pgs 2026-04-17 04:59:24.943154 | orchestrator | objects: 554 objects, 2.2 GiB 2026-04-17 04:59:24.943164 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-04-17 04:59:24.943176 | orchestrator | pgs: 401 active+clean 2026-04-17 04:59:24.943187 | orchestrator | 2026-04-17 04:59:25.006400 | orchestrator | 2026-04-17 04:59:25.006470 | orchestrator | # Ceph versions 2026-04-17 
04:59:25.006478 | orchestrator | 2026-04-17 04:59:25.006485 | orchestrator | + echo 2026-04-17 04:59:25.006492 | orchestrator | + echo '# Ceph versions' 2026-04-17 04:59:25.006499 | orchestrator | + echo 2026-04-17 04:59:25.006505 | orchestrator | + ceph versions 2026-04-17 04:59:25.683258 | orchestrator | { 2026-04-17 04:59:25.683358 | orchestrator | "mon": { 2026-04-17 04:59:25.683374 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-17 04:59:25.683387 | orchestrator | }, 2026-04-17 04:59:25.683399 | orchestrator | "mgr": { 2026-04-17 04:59:25.683410 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-17 04:59:25.683421 | orchestrator | }, 2026-04-17 04:59:25.683431 | orchestrator | "osd": { 2026-04-17 04:59:25.683442 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-04-17 04:59:25.683453 | orchestrator | }, 2026-04-17 04:59:25.683463 | orchestrator | "mds": { 2026-04-17 04:59:25.683474 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-17 04:59:25.683485 | orchestrator | }, 2026-04-17 04:59:25.683495 | orchestrator | "rgw": { 2026-04-17 04:59:25.683506 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-17 04:59:25.683516 | orchestrator | }, 2026-04-17 04:59:25.683527 | orchestrator | "overall": { 2026-04-17 04:59:25.683539 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-04-17 04:59:25.683550 | orchestrator | } 2026-04-17 04:59:25.683561 | orchestrator | } 2026-04-17 04:59:25.736454 | orchestrator | 2026-04-17 04:59:25.736548 | orchestrator | # Ceph OSD tree 2026-04-17 04:59:25.736562 | orchestrator | 2026-04-17 04:59:25.736575 | orchestrator | + echo 2026-04-17 04:59:25.736587 | orchestrator | + echo '# Ceph OSD tree' 2026-04-17 
04:59:25.736599 | orchestrator | + echo 2026-04-17 04:59:25.736610 | orchestrator | + ceph osd df tree 2026-04-17 04:59:26.285943 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-17 04:59:26.286102 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 414 MiB 113 GiB 5.90 1.00 - root default 2026-04-17 04:59:26.286121 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2026-04-17 04:59:26.286133 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 66 MiB 19 GiB 5.34 0.90 190 up osd.0 2026-04-17 04:59:26.286144 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 78 MiB 19 GiB 6.49 1.10 202 up osd.4 2026-04-17 04:59:26.286156 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-04-17 04:59:26.286192 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.0 GiB 987 MiB 1 KiB 78 MiB 19 GiB 5.20 0.88 195 up osd.2 2026-04-17 04:59:26.286204 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 66 MiB 19 GiB 6.63 1.12 195 up osd.5 2026-04-17 04:59:26.286215 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-5 2026-04-17 04:59:26.286227 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.90 1.00 184 up osd.1 2026-04-17 04:59:26.286238 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 66 MiB 19 GiB 5.85 0.99 204 up osd.3 2026-04-17 04:59:26.286249 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 414 MiB 113 GiB 5.90 2026-04-17 04:59:26.286260 | orchestrator | MIN/MAX VAR: 0.88/1.12 STDDEV: 0.53 2026-04-17 04:59:26.335381 | orchestrator | 2026-04-17 04:59:26.335492 | orchestrator | # Ceph monitor status 2026-04-17 04:59:26.335517 | orchestrator | 2026-04-17 04:59:26.335538 | orchestrator | + echo 2026-04-17 04:59:26.335558 | orchestrator | + echo '# 
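The `MIN/MAX VAR: 0.88/1.12` line in the `ceph osd df tree` output above is each OSD's `%USE` divided by the cluster-wide `%USE`. The six OSD rows from the table reproduce it directly:

```python
# %USE per OSD id, taken from the `ceph osd df tree` rows above.
osd_use = {0: 5.34, 4: 6.49, 2: 5.20, 5: 6.63, 1: 5.90, 3: 5.85}
cluster_use = 5.90  # TOTAL %USE row

# VAR = per-OSD utilization relative to the cluster average.
variances = {osd: use / cluster_use for osd, use in osd_use.items()}
print(f"MIN/MAX VAR: {min(variances.values()):.2f}/{max(variances.values()):.2f}")
# -> MIN/MAX VAR: 0.88/1.12, matching the log
```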
Ceph monitor status' 2026-04-17 04:59:26.335578 | orchestrator | + echo 2026-04-17 04:59:26.335598 | orchestrator | + ceph mon stat 2026-04-17 04:59:26.938990 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-17 04:59:26.987317 | orchestrator | 2026-04-17 04:59:26.987409 | orchestrator | # Ceph quorum status 2026-04-17 04:59:26.987426 | orchestrator | 2026-04-17 04:59:26.987438 | orchestrator | + echo 2026-04-17 04:59:26.987450 | orchestrator | + echo '# Ceph quorum status' 2026-04-17 04:59:26.987462 | orchestrator | + echo 2026-04-17 04:59:26.987719 | orchestrator | + ceph quorum_status 2026-04-17 04:59:26.987741 | orchestrator | + jq 2026-04-17 04:59:27.649727 | orchestrator | { 2026-04-17 04:59:27.649912 | orchestrator | "election_epoch": 8, 2026-04-17 04:59:27.649932 | orchestrator | "quorum": [ 2026-04-17 04:59:27.649944 | orchestrator | 0, 2026-04-17 04:59:27.649955 | orchestrator | 1, 2026-04-17 04:59:27.649966 | orchestrator | 2 2026-04-17 04:59:27.649976 | orchestrator | ], 2026-04-17 04:59:27.649987 | orchestrator | "quorum_names": [ 2026-04-17 04:59:27.649998 | orchestrator | "testbed-node-0", 2026-04-17 04:59:27.650009 | orchestrator | "testbed-node-1", 2026-04-17 04:59:27.650069 | orchestrator | "testbed-node-2" 2026-04-17 04:59:27.650082 | orchestrator | ], 2026-04-17 04:59:27.650093 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-17 04:59:27.650105 | orchestrator | "quorum_age": 4064, 2026-04-17 04:59:27.650116 | orchestrator | "features": { 2026-04-17 04:59:27.650127 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-17 04:59:27.650137 | orchestrator | "quorum_mon": [ 2026-04-17 04:59:27.650148 | 
orchestrator | "kraken", 2026-04-17 04:59:27.650159 | orchestrator | "luminous", 2026-04-17 04:59:27.650170 | orchestrator | "mimic", 2026-04-17 04:59:27.650181 | orchestrator | "osdmap-prune", 2026-04-17 04:59:27.650192 | orchestrator | "nautilus", 2026-04-17 04:59:27.650202 | orchestrator | "octopus", 2026-04-17 04:59:27.650213 | orchestrator | "pacific", 2026-04-17 04:59:27.650224 | orchestrator | "elector-pinging", 2026-04-17 04:59:27.650234 | orchestrator | "quincy", 2026-04-17 04:59:27.650245 | orchestrator | "reef" 2026-04-17 04:59:27.650256 | orchestrator | ] 2026-04-17 04:59:27.650266 | orchestrator | }, 2026-04-17 04:59:27.650279 | orchestrator | "monmap": { 2026-04-17 04:59:27.650292 | orchestrator | "epoch": 1, 2026-04-17 04:59:27.650305 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-17 04:59:27.650318 | orchestrator | "modified": "2026-04-17T03:51:24.672818Z", 2026-04-17 04:59:27.650331 | orchestrator | "created": "2026-04-17T03:51:24.672818Z", 2026-04-17 04:59:27.650344 | orchestrator | "min_mon_release": 18, 2026-04-17 04:59:27.650356 | orchestrator | "min_mon_release_name": "reef", 2026-04-17 04:59:27.650368 | orchestrator | "election_strategy": 1, 2026-04-17 04:59:27.650381 | orchestrator | "disallowed_leaders: ": "", 2026-04-17 04:59:27.650394 | orchestrator | "stretch_mode": false, 2026-04-17 04:59:27.650406 | orchestrator | "tiebreaker_mon": "", 2026-04-17 04:59:27.650444 | orchestrator | "removed_ranks: ": "", 2026-04-17 04:59:27.650457 | orchestrator | "features": { 2026-04-17 04:59:27.650470 | orchestrator | "persistent": [ 2026-04-17 04:59:27.650482 | orchestrator | "kraken", 2026-04-17 04:59:27.650493 | orchestrator | "luminous", 2026-04-17 04:59:27.650505 | orchestrator | "mimic", 2026-04-17 04:59:27.650517 | orchestrator | "osdmap-prune", 2026-04-17 04:59:27.650529 | orchestrator | "nautilus", 2026-04-17 04:59:27.650540 | orchestrator | "octopus", 2026-04-17 04:59:27.650552 | orchestrator | "pacific", 2026-04-17 
04:59:27.650564 | orchestrator | "elector-pinging", 2026-04-17 04:59:27.650576 | orchestrator | "quincy", 2026-04-17 04:59:27.650588 | orchestrator | "reef" 2026-04-17 04:59:27.650599 | orchestrator | ], 2026-04-17 04:59:27.650611 | orchestrator | "optional": [] 2026-04-17 04:59:27.650624 | orchestrator | }, 2026-04-17 04:59:27.650635 | orchestrator | "mons": [ 2026-04-17 04:59:27.650645 | orchestrator | { 2026-04-17 04:59:27.650656 | orchestrator | "rank": 0, 2026-04-17 04:59:27.650666 | orchestrator | "name": "testbed-node-0", 2026-04-17 04:59:27.650677 | orchestrator | "public_addrs": { 2026-04-17 04:59:27.650687 | orchestrator | "addrvec": [ 2026-04-17 04:59:27.650698 | orchestrator | { 2026-04-17 04:59:27.650708 | orchestrator | "type": "v2", 2026-04-17 04:59:27.650719 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-17 04:59:27.650730 | orchestrator | "nonce": 0 2026-04-17 04:59:27.650741 | orchestrator | }, 2026-04-17 04:59:27.650751 | orchestrator | { 2026-04-17 04:59:27.650762 | orchestrator | "type": "v1", 2026-04-17 04:59:27.650772 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-17 04:59:27.650783 | orchestrator | "nonce": 0 2026-04-17 04:59:27.650793 | orchestrator | } 2026-04-17 04:59:27.650804 | orchestrator | ] 2026-04-17 04:59:27.650814 | orchestrator | }, 2026-04-17 04:59:27.650855 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-17 04:59:27.650866 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-17 04:59:27.650877 | orchestrator | "priority": 0, 2026-04-17 04:59:27.650887 | orchestrator | "weight": 0, 2026-04-17 04:59:27.650898 | orchestrator | "crush_location": "{}" 2026-04-17 04:59:27.650909 | orchestrator | }, 2026-04-17 04:59:27.650919 | orchestrator | { 2026-04-17 04:59:27.650930 | orchestrator | "rank": 1, 2026-04-17 04:59:27.650941 | orchestrator | "name": "testbed-node-1", 2026-04-17 04:59:27.651086 | orchestrator | "public_addrs": { 2026-04-17 04:59:27.651109 | orchestrator | "addrvec": [ 2026-04-17 
04:59:27.651120 | orchestrator | { 2026-04-17 04:59:27.651131 | orchestrator | "type": "v2", 2026-04-17 04:59:27.651160 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-17 04:59:27.651171 | orchestrator | "nonce": 0 2026-04-17 04:59:27.651182 | orchestrator | }, 2026-04-17 04:59:27.651192 | orchestrator | { 2026-04-17 04:59:27.651203 | orchestrator | "type": "v1", 2026-04-17 04:59:27.651213 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-17 04:59:27.651224 | orchestrator | "nonce": 0 2026-04-17 04:59:27.651235 | orchestrator | } 2026-04-17 04:59:27.651245 | orchestrator | ] 2026-04-17 04:59:27.651255 | orchestrator | }, 2026-04-17 04:59:27.651266 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-17 04:59:27.651277 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-17 04:59:27.651287 | orchestrator | "priority": 0, 2026-04-17 04:59:27.651298 | orchestrator | "weight": 0, 2026-04-17 04:59:27.651308 | orchestrator | "crush_location": "{}" 2026-04-17 04:59:27.651319 | orchestrator | }, 2026-04-17 04:59:27.651329 | orchestrator | { 2026-04-17 04:59:27.651340 | orchestrator | "rank": 2, 2026-04-17 04:59:27.651351 | orchestrator | "name": "testbed-node-2", 2026-04-17 04:59:27.651361 | orchestrator | "public_addrs": { 2026-04-17 04:59:27.651372 | orchestrator | "addrvec": [ 2026-04-17 04:59:27.651383 | orchestrator | { 2026-04-17 04:59:27.651393 | orchestrator | "type": "v2", 2026-04-17 04:59:27.651404 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-17 04:59:27.651415 | orchestrator | "nonce": 0 2026-04-17 04:59:27.651425 | orchestrator | }, 2026-04-17 04:59:27.651436 | orchestrator | { 2026-04-17 04:59:27.651446 | orchestrator | "type": "v1", 2026-04-17 04:59:27.651457 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-17 04:59:27.651468 | orchestrator | "nonce": 0 2026-04-17 04:59:27.651478 | orchestrator | } 2026-04-17 04:59:27.651489 | orchestrator | ] 2026-04-17 04:59:27.651509 | orchestrator | }, 2026-04-17 04:59:27.651520 
| orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-17 04:59:27.651531 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-17 04:59:27.651541 | orchestrator | "priority": 0, 2026-04-17 04:59:27.651552 | orchestrator | "weight": 0, 2026-04-17 04:59:27.651563 | orchestrator | "crush_location": "{}" 2026-04-17 04:59:27.651573 | orchestrator | } 2026-04-17 04:59:27.651584 | orchestrator | ] 2026-04-17 04:59:27.651594 | orchestrator | } 2026-04-17 04:59:27.651605 | orchestrator | } 2026-04-17 04:59:27.651628 | orchestrator | + echo 2026-04-17 04:59:27.651739 | orchestrator | 2026-04-17 04:59:27.651755 | orchestrator | # Ceph free space status 2026-04-17 04:59:27.651766 | orchestrator | 2026-04-17 04:59:27.651784 | orchestrator | + echo '# Ceph free space status' 2026-04-17 04:59:27.651796 | orchestrator | + echo 2026-04-17 04:59:27.651807 | orchestrator | + ceph df 2026-04-17 04:59:28.338086 | orchestrator | --- RAW STORAGE --- 2026-04-17 04:59:28.339107 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-17 04:59:28.339159 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-04-17 04:59:28.339169 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-04-17 04:59:28.339177 | orchestrator | 2026-04-17 04:59:28.339186 | orchestrator | --- POOLS --- 2026-04-17 04:59:28.339194 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-17 04:59:28.339204 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-04-17 04:59:28.339212 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-17 04:59:28.339220 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-17 04:59:28.339228 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-17 04:59:28.339236 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-17 04:59:28.339245 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-17 04:59:28.339253 | orchestrator | default.rgw.log 7 32 
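The `quorum 0,1,2` result in the `ceph quorum_status` JSON above means every monitor in the monmap is also in `quorum_names`. A sketch of that comparison, using the field names from the JSON printed above (monmap entries trimmed to the relevant key):

```python
def quorum_complete(qs: dict) -> bool:
    """True when every mon listed in the monmap is currently in quorum."""
    members = {m["name"] for m in qs["monmap"]["mons"]}
    return members == set(qs["quorum_names"])

qs = {
    "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "monmap": {"mons": [{"name": f"testbed-node-{i}"} for i in range(3)]},
}
print(quorum_complete(qs))  # True: all three mons are in quorum
```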
3.6 KiB 209 408 KiB 0 35 GiB 2026-04-17 04:59:28.339261 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-17 04:59:28.339268 | orchestrator | .rgw.root 9 32 2.6 KiB 6 48 KiB 0 53 GiB 2026-04-17 04:59:28.339276 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-17 04:59:28.339284 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-17 04:59:28.339292 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.91 35 GiB 2026-04-17 04:59:28.339300 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-17 04:59:28.339308 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-17 04:59:28.390229 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-17 04:59:28.456988 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-17 04:59:28.457103 | orchestrator | + osism apply facts 2026-04-17 04:59:40.817149 | orchestrator | 2026-04-17 04:59:40 | INFO  | Task 75de508f-5a6b-4a62-9fee-1799b00b3ac1 (facts) was prepared for execution. 2026-04-17 04:59:40.817268 | orchestrator | 2026-04-17 04:59:40 | INFO  | It takes a moment until task 75de508f-5a6b-4a62-9fee-1799b00b3ac1 (facts) has been started and output is visible here. 
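The trace lines `++ semver 9.5.0 5.0.0` and `+ [[ 1 -eq -1 ]]` above show a three-way comparator: the helper prints -1/0/1 and the guarded branch only runs when the installed manager version is older than 5.0.0. A plain numeric sketch of that comparison, assuming dotted numeric versions (the real `semver` helper may also handle pre-release tags):

```python
def compare(a: str, b: str) -> int:
    """Three-way semver-style compare: -1 if a < b, 0 if equal, 1 if a > b."""
    ta, tb = (tuple(int(x) for x in v.split(".")) for v in (a, b))
    return (ta > tb) - (ta < tb)

print(compare("9.5.0", "5.0.0"))  # 1 -> the `[[ 1 -eq -1 ]]` guard is false
```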
2026-04-17 04:59:54.920231 | orchestrator | 2026-04-17 04:59:54.920340 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-17 04:59:54.920357 | orchestrator | 2026-04-17 04:59:54.920372 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-17 04:59:54.920387 | orchestrator | Friday 17 April 2026 04:59:45 +0000 (0:00:00.295) 0:00:00.295 ********** 2026-04-17 04:59:54.920402 | orchestrator | ok: [testbed-manager] 2026-04-17 04:59:54.920419 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:59:54.920434 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:59:54.920449 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:59:54.920459 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:59:54.920468 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:59:54.920476 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:59:54.920510 | orchestrator | 2026-04-17 04:59:54.920520 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-17 04:59:54.920529 | orchestrator | Friday 17 April 2026 04:59:46 +0000 (0:00:01.223) 0:00:01.519 ********** 2026-04-17 04:59:54.920537 | orchestrator | skipping: [testbed-manager] 2026-04-17 04:59:54.920546 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:59:54.920555 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:59:54.920563 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:59:54.920572 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:59:54.920580 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:59:54.920589 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:59:54.920598 | orchestrator | 2026-04-17 04:59:54.920606 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-17 04:59:54.920615 | orchestrator | 2026-04-17 04:59:54.920624 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-17 04:59:54.920632 | orchestrator | Friday 17 April 2026 04:59:48 +0000 (0:00:01.461) 0:00:02.981 ********** 2026-04-17 04:59:54.920641 | orchestrator | ok: [testbed-node-1] 2026-04-17 04:59:54.920649 | orchestrator | ok: [testbed-node-2] 2026-04-17 04:59:54.920658 | orchestrator | ok: [testbed-node-0] 2026-04-17 04:59:54.920666 | orchestrator | ok: [testbed-manager] 2026-04-17 04:59:54.920675 | orchestrator | ok: [testbed-node-3] 2026-04-17 04:59:54.920683 | orchestrator | ok: [testbed-node-5] 2026-04-17 04:59:54.920691 | orchestrator | ok: [testbed-node-4] 2026-04-17 04:59:54.920700 | orchestrator | 2026-04-17 04:59:54.920708 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-17 04:59:54.920717 | orchestrator | 2026-04-17 04:59:54.920725 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-17 04:59:54.920734 | orchestrator | Friday 17 April 2026 04:59:53 +0000 (0:00:05.412) 0:00:08.393 ********** 2026-04-17 04:59:54.920743 | orchestrator | skipping: [testbed-manager] 2026-04-17 04:59:54.920752 | orchestrator | skipping: [testbed-node-0] 2026-04-17 04:59:54.920760 | orchestrator | skipping: [testbed-node-1] 2026-04-17 04:59:54.920768 | orchestrator | skipping: [testbed-node-2] 2026-04-17 04:59:54.920777 | orchestrator | skipping: [testbed-node-3] 2026-04-17 04:59:54.920787 | orchestrator | skipping: [testbed-node-4] 2026-04-17 04:59:54.920797 | orchestrator | skipping: [testbed-node-5] 2026-04-17 04:59:54.920806 | orchestrator | 2026-04-17 04:59:54.920816 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 04:59:54.920827 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 04:59:54.920838 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-17 04:59:54.920861 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 04:59:54.920899 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 04:59:54.920909 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 04:59:54.920919 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 04:59:54.920929 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 04:59:54.920939 | orchestrator | 2026-04-17 04:59:54.920949 | orchestrator | 2026-04-17 04:59:54.920959 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 04:59:54.920969 | orchestrator | Friday 17 April 2026 04:59:54 +0000 (0:00:00.622) 0:00:09.015 ********** 2026-04-17 04:59:54.920986 | orchestrator | =============================================================================== 2026-04-17 04:59:54.920996 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.41s 2026-04-17 04:59:54.921006 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.46s 2026-04-17 04:59:54.921016 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.22s 2026-04-17 04:59:54.921029 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2026-04-17 04:59:55.277932 | orchestrator | + osism validate ceph-mons 2026-04-17 05:00:29.533393 | orchestrator | 2026-04-17 05:00:29.533469 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-17 05:00:29.533476 | orchestrator | 2026-04-17 05:00:29.533481 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-04-17 05:00:29.533486 | orchestrator | Friday 17 April 2026 05:00:12 +0000 (0:00:00.501) 0:00:00.501 ********** 2026-04-17 05:00:29.533491 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:29.533495 | orchestrator | 2026-04-17 05:00:29.533499 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-17 05:00:29.533503 | orchestrator | Friday 17 April 2026 05:00:13 +0000 (0:00:00.940) 0:00:01.441 ********** 2026-04-17 05:00:29.533507 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:29.533511 | orchestrator | 2026-04-17 05:00:29.533515 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-17 05:00:29.533523 | orchestrator | Friday 17 April 2026 05:00:14 +0000 (0:00:01.122) 0:00:02.564 ********** 2026-04-17 05:00:29.533533 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.533543 | orchestrator | 2026-04-17 05:00:29.533553 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-17 05:00:29.533563 | orchestrator | Friday 17 April 2026 05:00:14 +0000 (0:00:00.144) 0:00:02.708 ********** 2026-04-17 05:00:29.533572 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.533582 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:00:29.533592 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:00:29.533602 | orchestrator | 2026-04-17 05:00:29.533612 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-17 05:00:29.533621 | orchestrator | Friday 17 April 2026 05:00:15 +0000 (0:00:00.356) 0:00:03.065 ********** 2026-04-17 05:00:29.533631 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:00:29.533641 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:00:29.533650 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.533660 | 
orchestrator | 2026-04-17 05:00:29.533669 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-17 05:00:29.533679 | orchestrator | Friday 17 April 2026 05:00:16 +0000 (0:00:01.014) 0:00:04.080 ********** 2026-04-17 05:00:29.533689 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.533699 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:00:29.533708 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:00:29.533718 | orchestrator | 2026-04-17 05:00:29.533728 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-17 05:00:29.533737 | orchestrator | Friday 17 April 2026 05:00:16 +0000 (0:00:00.315) 0:00:04.395 ********** 2026-04-17 05:00:29.533747 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.533757 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:00:29.533766 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:00:29.533776 | orchestrator | 2026-04-17 05:00:29.533785 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 05:00:29.533795 | orchestrator | Friday 17 April 2026 05:00:17 +0000 (0:00:00.573) 0:00:04.969 ********** 2026-04-17 05:00:29.533804 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.533814 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:00:29.533824 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:00:29.533833 | orchestrator | 2026-04-17 05:00:29.533843 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-17 05:00:29.533853 | orchestrator | Friday 17 April 2026 05:00:17 +0000 (0:00:00.339) 0:00:05.308 ********** 2026-04-17 05:00:29.533883 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.533894 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:00:29.533904 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:00:29.533913 | orchestrator | 2026-04-17 
05:00:29.533923 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-17 05:00:29.533994 | orchestrator | Friday 17 April 2026 05:00:17 +0000 (0:00:00.310) 0:00:05.619 ********** 2026-04-17 05:00:29.534009 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.534079 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:00:29.534091 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:00:29.534103 | orchestrator | 2026-04-17 05:00:29.534115 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 05:00:29.534126 | orchestrator | Friday 17 April 2026 05:00:18 +0000 (0:00:00.592) 0:00:06.212 ********** 2026-04-17 05:00:29.534139 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.534150 | orchestrator | 2026-04-17 05:00:29.534161 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 05:00:29.534173 | orchestrator | Friday 17 April 2026 05:00:18 +0000 (0:00:00.260) 0:00:06.473 ********** 2026-04-17 05:00:29.534184 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.534196 | orchestrator | 2026-04-17 05:00:29.534207 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-17 05:00:29.534219 | orchestrator | Friday 17 April 2026 05:00:18 +0000 (0:00:00.267) 0:00:06.740 ********** 2026-04-17 05:00:29.534230 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.534241 | orchestrator | 2026-04-17 05:00:29.534252 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:00:29.534264 | orchestrator | Friday 17 April 2026 05:00:19 +0000 (0:00:00.254) 0:00:06.994 ********** 2026-04-17 05:00:29.534276 | orchestrator | 2026-04-17 05:00:29.534287 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:00:29.534298 | orchestrator | 
Friday 17 April 2026 05:00:19 +0000 (0:00:00.070) 0:00:07.065 ********** 2026-04-17 05:00:29.534308 | orchestrator | 2026-04-17 05:00:29.534317 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:00:29.534327 | orchestrator | Friday 17 April 2026 05:00:19 +0000 (0:00:00.098) 0:00:07.163 ********** 2026-04-17 05:00:29.534336 | orchestrator | 2026-04-17 05:00:29.534346 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 05:00:29.534355 | orchestrator | Friday 17 April 2026 05:00:19 +0000 (0:00:00.098) 0:00:07.262 ********** 2026-04-17 05:00:29.534365 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.534374 | orchestrator | 2026-04-17 05:00:29.534384 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-17 05:00:29.534394 | orchestrator | Friday 17 April 2026 05:00:19 +0000 (0:00:00.305) 0:00:07.568 ********** 2026-04-17 05:00:29.534403 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.534413 | orchestrator | 2026-04-17 05:00:29.534440 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-17 05:00:29.534450 | orchestrator | Friday 17 April 2026 05:00:20 +0000 (0:00:00.251) 0:00:07.820 ********** 2026-04-17 05:00:29.534460 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.534469 | orchestrator | 2026-04-17 05:00:29.534479 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-17 05:00:29.534489 | orchestrator | Friday 17 April 2026 05:00:20 +0000 (0:00:00.137) 0:00:07.957 ********** 2026-04-17 05:00:29.534498 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:00:29.534508 | orchestrator | 2026-04-17 05:00:29.534522 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-17 05:00:29.534532 | orchestrator | Friday 
17 April 2026 05:00:21 +0000 (0:00:01.649) 0:00:09.606 ********** 2026-04-17 05:00:29.534541 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.534551 | orchestrator | 2026-04-17 05:00:29.534561 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-17 05:00:29.534580 | orchestrator | Friday 17 April 2026 05:00:22 +0000 (0:00:00.609) 0:00:10.216 ********** 2026-04-17 05:00:29.534589 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.534599 | orchestrator | 2026-04-17 05:00:29.534624 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-17 05:00:29.534634 | orchestrator | Friday 17 April 2026 05:00:22 +0000 (0:00:00.126) 0:00:10.342 ********** 2026-04-17 05:00:29.534644 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.534653 | orchestrator | 2026-04-17 05:00:29.534663 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-17 05:00:29.534673 | orchestrator | Friday 17 April 2026 05:00:22 +0000 (0:00:00.329) 0:00:10.671 ********** 2026-04-17 05:00:29.534682 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.534692 | orchestrator | 2026-04-17 05:00:29.534701 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-17 05:00:29.534711 | orchestrator | Friday 17 April 2026 05:00:23 +0000 (0:00:00.368) 0:00:11.040 ********** 2026-04-17 05:00:29.534720 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.534730 | orchestrator | 2026-04-17 05:00:29.534739 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-17 05:00:29.534749 | orchestrator | Friday 17 April 2026 05:00:23 +0000 (0:00:00.131) 0:00:11.171 ********** 2026-04-17 05:00:29.534758 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.534768 | orchestrator | 2026-04-17 05:00:29.534778 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2026-04-17 05:00:29.534787 | orchestrator | Friday 17 April 2026 05:00:23 +0000 (0:00:00.162) 0:00:11.333 ********** 2026-04-17 05:00:29.534797 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.534806 | orchestrator | 2026-04-17 05:00:29.534816 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-17 05:00:29.534825 | orchestrator | Friday 17 April 2026 05:00:23 +0000 (0:00:00.127) 0:00:11.461 ********** 2026-04-17 05:00:29.534835 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:00:29.534844 | orchestrator | 2026-04-17 05:00:29.534854 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-17 05:00:29.534863 | orchestrator | Friday 17 April 2026 05:00:24 +0000 (0:00:01.315) 0:00:12.777 ********** 2026-04-17 05:00:29.534873 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.534882 | orchestrator | 2026-04-17 05:00:29.534892 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-17 05:00:29.534901 | orchestrator | Friday 17 April 2026 05:00:25 +0000 (0:00:00.324) 0:00:13.101 ********** 2026-04-17 05:00:29.534911 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.534920 | orchestrator | 2026-04-17 05:00:29.534930 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-17 05:00:29.534968 | orchestrator | Friday 17 April 2026 05:00:25 +0000 (0:00:00.143) 0:00:13.245 ********** 2026-04-17 05:00:29.534984 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:29.535002 | orchestrator | 2026-04-17 05:00:29.535016 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-17 05:00:29.535033 | orchestrator | Friday 17 April 2026 05:00:25 +0000 (0:00:00.146) 0:00:13.392 ********** 2026-04-17 05:00:29.535043 | orchestrator | 
skipping: [testbed-node-0] 2026-04-17 05:00:29.535053 | orchestrator | 2026-04-17 05:00:29.535067 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-17 05:00:29.535077 | orchestrator | Friday 17 April 2026 05:00:25 +0000 (0:00:00.151) 0:00:13.543 ********** 2026-04-17 05:00:29.535089 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.535105 | orchestrator | 2026-04-17 05:00:29.535121 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-17 05:00:29.535137 | orchestrator | Friday 17 April 2026 05:00:26 +0000 (0:00:00.391) 0:00:13.935 ********** 2026-04-17 05:00:29.535153 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:29.535168 | orchestrator | 2026-04-17 05:00:29.535183 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-17 05:00:29.535211 | orchestrator | Friday 17 April 2026 05:00:26 +0000 (0:00:00.290) 0:00:14.226 ********** 2026-04-17 05:00:29.535227 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:29.535242 | orchestrator | 2026-04-17 05:00:29.535257 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 05:00:29.535273 | orchestrator | Friday 17 April 2026 05:00:26 +0000 (0:00:00.248) 0:00:14.474 ********** 2026-04-17 05:00:29.535288 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:29.535305 | orchestrator | 2026-04-17 05:00:29.535321 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 05:00:29.535338 | orchestrator | Friday 17 April 2026 05:00:28 +0000 (0:00:01.981) 0:00:16.455 ********** 2026-04-17 05:00:29.535349 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:29.535358 | orchestrator | 2026-04-17 05:00:29.535368 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-04-17 05:00:29.535377 | orchestrator | Friday 17 April 2026 05:00:28 +0000 (0:00:00.336) 0:00:16.792 ********** 2026-04-17 05:00:29.535386 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:29.535396 | orchestrator | 2026-04-17 05:00:29.535416 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:00:32.652398 | orchestrator | Friday 17 April 2026 05:00:29 +0000 (0:00:00.284) 0:00:17.077 ********** 2026-04-17 05:00:32.652505 | orchestrator | 2026-04-17 05:00:32.652521 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:00:32.652533 | orchestrator | Friday 17 April 2026 05:00:29 +0000 (0:00:00.090) 0:00:17.167 ********** 2026-04-17 05:00:32.652544 | orchestrator | 2026-04-17 05:00:32.652555 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:00:32.652567 | orchestrator | Friday 17 April 2026 05:00:29 +0000 (0:00:00.075) 0:00:17.243 ********** 2026-04-17 05:00:32.652578 | orchestrator | 2026-04-17 05:00:32.652589 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-17 05:00:32.652599 | orchestrator | Friday 17 April 2026 05:00:29 +0000 (0:00:00.076) 0:00:17.320 ********** 2026-04-17 05:00:32.652610 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:32.652621 | orchestrator | 2026-04-17 05:00:32.652639 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 05:00:32.652669 | orchestrator | Friday 17 April 2026 05:00:31 +0000 (0:00:01.699) 0:00:19.020 ********** 2026-04-17 05:00:32.652688 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-17 05:00:32.652705 | orchestrator |  "msg": [ 2026-04-17 
05:00:32.652726 | orchestrator |  "Validator run completed.", 2026-04-17 05:00:32.652746 | orchestrator |  "You can find the report file here:", 2026-04-17 05:00:32.652765 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-17T05:00:13+00:00-report.json", 2026-04-17 05:00:32.652785 | orchestrator |  "on the following host:", 2026-04-17 05:00:32.652804 | orchestrator |  "testbed-manager" 2026-04-17 05:00:32.652816 | orchestrator |  ] 2026-04-17 05:00:32.652827 | orchestrator | } 2026-04-17 05:00:32.652846 | orchestrator | 2026-04-17 05:00:32.652865 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:00:32.652884 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-17 05:00:32.652897 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 05:00:32.652908 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 05:00:32.652919 | orchestrator | 2026-04-17 05:00:32.652932 | orchestrator | 2026-04-17 05:00:32.653018 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:00:32.653054 | orchestrator | Friday 17 April 2026 05:00:32 +0000 (0:00:01.002) 0:00:20.022 ********** 2026-04-17 05:00:32.653068 | orchestrator | =============================================================================== 2026-04-17 05:00:32.653081 | orchestrator | Aggregate test results step one ----------------------------------------- 1.98s 2026-04-17 05:00:32.653093 | orchestrator | Write report file ------------------------------------------------------- 1.70s 2026-04-17 05:00:32.653106 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.65s 2026-04-17 05:00:32.653119 | orchestrator | Gather status data 
------------------------------------------------------ 1.32s 2026-04-17 05:00:32.653132 | orchestrator | Create report output directory ------------------------------------------ 1.12s 2026-04-17 05:00:32.653145 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2026-04-17 05:00:32.653157 | orchestrator | Print report file information ------------------------------------------- 1.00s 2026-04-17 05:00:32.653170 | orchestrator | Get timestamp for report file ------------------------------------------- 0.94s 2026-04-17 05:00:32.653197 | orchestrator | Set quorum test data ---------------------------------------------------- 0.61s 2026-04-17 05:00:32.653210 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.59s 2026-04-17 05:00:32.653222 | orchestrator | Set test result to passed if container is existing ---------------------- 0.57s 2026-04-17 05:00:32.653235 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.39s 2026-04-17 05:00:32.653247 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.37s 2026-04-17 05:00:32.653260 | orchestrator | Prepare test data for container existance test -------------------------- 0.36s 2026-04-17 05:00:32.653273 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s 2026-04-17 05:00:32.653285 | orchestrator | Aggregate test results step two ----------------------------------------- 0.34s 2026-04-17 05:00:32.653295 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2026-04-17 05:00:32.653306 | orchestrator | Set health test data ---------------------------------------------------- 0.32s 2026-04-17 05:00:32.653317 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s 2026-04-17 05:00:32.653327 | orchestrator | Set test result to failed if 
ceph-mon is not running -------------------- 0.31s 2026-04-17 05:00:33.024348 | orchestrator | + osism validate ceph-mgrs 2026-04-17 05:00:54.810746 | orchestrator | 2026-04-17 05:00:54.810857 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-17 05:00:54.810871 | orchestrator | 2026-04-17 05:00:54.810878 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-17 05:00:54.810886 | orchestrator | Friday 17 April 2026 05:00:39 +0000 (0:00:00.467) 0:00:00.467 ********** 2026-04-17 05:00:54.810893 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:54.810900 | orchestrator | 2026-04-17 05:00:54.810906 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-17 05:00:54.810913 | orchestrator | Friday 17 April 2026 05:00:40 +0000 (0:00:00.881) 0:00:01.349 ********** 2026-04-17 05:00:54.810920 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:54.810926 | orchestrator | 2026-04-17 05:00:54.810932 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-17 05:00:54.810938 | orchestrator | Friday 17 April 2026 05:00:41 +0000 (0:00:01.067) 0:00:02.417 ********** 2026-04-17 05:00:54.810945 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:54.810952 | orchestrator | 2026-04-17 05:00:54.810958 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-17 05:00:54.810963 | orchestrator | Friday 17 April 2026 05:00:41 +0000 (0:00:00.136) 0:00:02.553 ********** 2026-04-17 05:00:54.810969 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:54.810975 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:00:54.811040 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:00:54.811048 | orchestrator | 2026-04-17 05:00:54.811079 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-04-17 05:00:54.811086 | orchestrator | Friday 17 April 2026 05:00:42 +0000 (0:00:00.301) 0:00:02.854 ********** 2026-04-17 05:00:54.811092 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:00:54.811098 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:54.811104 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:00:54.811110 | orchestrator | 2026-04-17 05:00:54.811116 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-17 05:00:54.811122 | orchestrator | Friday 17 April 2026 05:00:43 +0000 (0:00:01.001) 0:00:03.856 ********** 2026-04-17 05:00:54.811128 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:54.811134 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:00:54.811140 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:00:54.811146 | orchestrator | 2026-04-17 05:00:54.811152 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-17 05:00:54.811158 | orchestrator | Friday 17 April 2026 05:00:43 +0000 (0:00:00.314) 0:00:04.170 ********** 2026-04-17 05:00:54.811164 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:54.811171 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:00:54.811177 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:00:54.811183 | orchestrator | 2026-04-17 05:00:54.811189 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 05:00:54.811196 | orchestrator | Friday 17 April 2026 05:00:44 +0000 (0:00:00.540) 0:00:04.711 ********** 2026-04-17 05:00:54.811202 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:54.811208 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:00:54.811215 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:00:54.811221 | orchestrator | 2026-04-17 05:00:54.811228 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-04-17 05:00:54.811235 | orchestrator | Friday 17 April 2026 05:00:44 +0000 (0:00:00.333) 0:00:05.045 ********** 2026-04-17 05:00:54.811242 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:54.811250 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:00:54.811257 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:00:54.811263 | orchestrator | 2026-04-17 05:00:54.811270 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-17 05:00:54.811277 | orchestrator | Friday 17 April 2026 05:00:44 +0000 (0:00:00.325) 0:00:05.370 ********** 2026-04-17 05:00:54.811284 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:54.811291 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:00:54.811299 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:00:54.811305 | orchestrator | 2026-04-17 05:00:54.811312 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 05:00:54.811319 | orchestrator | Friday 17 April 2026 05:00:45 +0000 (0:00:00.580) 0:00:05.951 ********** 2026-04-17 05:00:54.811325 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:54.811332 | orchestrator | 2026-04-17 05:00:54.811338 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 05:00:54.811345 | orchestrator | Friday 17 April 2026 05:00:45 +0000 (0:00:00.249) 0:00:06.201 ********** 2026-04-17 05:00:54.811352 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:54.811358 | orchestrator | 2026-04-17 05:00:54.811364 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-17 05:00:54.811370 | orchestrator | Friday 17 April 2026 05:00:45 +0000 (0:00:00.282) 0:00:06.483 ********** 2026-04-17 05:00:54.811377 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:54.811384 | orchestrator | 2026-04-17 05:00:54.811391 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-04-17 05:00:54.811398 | orchestrator | Friday 17 April 2026 05:00:46 +0000 (0:00:00.254) 0:00:06.737 ********** 2026-04-17 05:00:54.811405 | orchestrator | 2026-04-17 05:00:54.811412 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:00:54.811420 | orchestrator | Friday 17 April 2026 05:00:46 +0000 (0:00:00.073) 0:00:06.811 ********** 2026-04-17 05:00:54.811427 | orchestrator | 2026-04-17 05:00:54.811442 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:00:54.811450 | orchestrator | Friday 17 April 2026 05:00:46 +0000 (0:00:00.078) 0:00:06.889 ********** 2026-04-17 05:00:54.811456 | orchestrator | 2026-04-17 05:00:54.811463 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 05:00:54.811470 | orchestrator | Friday 17 April 2026 05:00:46 +0000 (0:00:00.077) 0:00:06.966 ********** 2026-04-17 05:00:54.811476 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:54.811483 | orchestrator | 2026-04-17 05:00:54.811491 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-17 05:00:54.811501 | orchestrator | Friday 17 April 2026 05:00:46 +0000 (0:00:00.257) 0:00:07.223 ********** 2026-04-17 05:00:54.811507 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:54.811514 | orchestrator | 2026-04-17 05:00:54.811540 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-17 05:00:54.811547 | orchestrator | Friday 17 April 2026 05:00:46 +0000 (0:00:00.246) 0:00:07.470 ********** 2026-04-17 05:00:54.811554 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:54.811561 | orchestrator | 2026-04-17 05:00:54.811567 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-04-17 05:00:54.811574 | orchestrator | Friday 17 April 2026 05:00:46 +0000 (0:00:00.128) 0:00:07.599 ********** 2026-04-17 05:00:54.811581 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:00:54.811588 | orchestrator | 2026-04-17 05:00:54.811594 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-17 05:00:54.811602 | orchestrator | Friday 17 April 2026 05:00:48 +0000 (0:00:01.869) 0:00:09.468 ********** 2026-04-17 05:00:54.811609 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:54.811615 | orchestrator | 2026-04-17 05:00:54.811622 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-17 05:00:54.811628 | orchestrator | Friday 17 April 2026 05:00:49 +0000 (0:00:00.503) 0:00:09.972 ********** 2026-04-17 05:00:54.811635 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:54.811641 | orchestrator | 2026-04-17 05:00:54.811648 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-17 05:00:54.811655 | orchestrator | Friday 17 April 2026 05:00:49 +0000 (0:00:00.339) 0:00:10.312 ********** 2026-04-17 05:00:54.811661 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:54.811667 | orchestrator | 2026-04-17 05:00:54.811674 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-17 05:00:54.811681 | orchestrator | Friday 17 April 2026 05:00:49 +0000 (0:00:00.155) 0:00:10.467 ********** 2026-04-17 05:00:54.811688 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:00:54.811695 | orchestrator | 2026-04-17 05:00:54.811702 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-17 05:00:54.811708 | orchestrator | Friday 17 April 2026 05:00:50 +0000 (0:00:00.153) 0:00:10.621 ********** 2026-04-17 05:00:54.811715 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 
05:00:54.811721 | orchestrator | 2026-04-17 05:00:54.811728 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-17 05:00:54.811734 | orchestrator | Friday 17 April 2026 05:00:50 +0000 (0:00:00.290) 0:00:10.911 ********** 2026-04-17 05:00:54.811741 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:00:54.811747 | orchestrator | 2026-04-17 05:00:54.811754 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 05:00:54.811776 | orchestrator | Friday 17 April 2026 05:00:50 +0000 (0:00:00.257) 0:00:11.168 ********** 2026-04-17 05:00:54.811831 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:54.811839 | orchestrator | 2026-04-17 05:00:54.811845 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 05:00:54.811852 | orchestrator | Friday 17 April 2026 05:00:51 +0000 (0:00:01.311) 0:00:12.480 ********** 2026-04-17 05:00:54.811859 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:54.811866 | orchestrator | 2026-04-17 05:00:54.811880 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-17 05:00:54.811887 | orchestrator | Friday 17 April 2026 05:00:52 +0000 (0:00:00.289) 0:00:12.769 ********** 2026-04-17 05:00:54.811894 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:54.811900 | orchestrator | 2026-04-17 05:00:54.811907 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:00:54.811914 | orchestrator | Friday 17 April 2026 05:00:52 +0000 (0:00:00.279) 0:00:13.049 ********** 2026-04-17 05:00:54.811920 | orchestrator | 2026-04-17 05:00:54.811927 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:00:54.811934 | orchestrator 
| Friday 17 April 2026 05:00:52 +0000 (0:00:00.074) 0:00:13.123 ********** 2026-04-17 05:00:54.811940 | orchestrator | 2026-04-17 05:00:54.811947 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:00:54.811953 | orchestrator | Friday 17 April 2026 05:00:52 +0000 (0:00:00.073) 0:00:13.197 ********** 2026-04-17 05:00:54.811960 | orchestrator | 2026-04-17 05:00:54.811967 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-17 05:00:54.811973 | orchestrator | Friday 17 April 2026 05:00:52 +0000 (0:00:00.303) 0:00:13.500 ********** 2026-04-17 05:00:54.811997 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 05:00:54.812004 | orchestrator | 2026-04-17 05:00:54.812011 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 05:00:54.812018 | orchestrator | Friday 17 April 2026 05:00:54 +0000 (0:00:01.416) 0:00:14.917 ********** 2026-04-17 05:00:54.812030 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-17 05:00:54.812037 | orchestrator |  "msg": [ 2026-04-17 05:00:54.812044 | orchestrator |  "Validator run completed.", 2026-04-17 05:00:54.812051 | orchestrator |  "You can find the report file here:", 2026-04-17 05:00:54.812058 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-17T05:00:40+00:00-report.json", 2026-04-17 05:00:54.812066 | orchestrator |  "on the following host:", 2026-04-17 05:00:54.812073 | orchestrator |  "testbed-manager" 2026-04-17 05:00:54.812080 | orchestrator |  ] 2026-04-17 05:00:54.812087 | orchestrator | } 2026-04-17 05:00:54.812094 | orchestrator | 2026-04-17 05:00:54.812100 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:00:54.812109 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-04-17 05:00:54.812117 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 05:00:54.812132 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 05:00:55.244547 | orchestrator | 2026-04-17 05:00:55.244648 | orchestrator | 2026-04-17 05:00:55.244663 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:00:55.244678 | orchestrator | Friday 17 April 2026 05:00:54 +0000 (0:00:00.473) 0:00:15.390 ********** 2026-04-17 05:00:55.244689 | orchestrator | =============================================================================== 2026-04-17 05:00:55.244700 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.87s 2026-04-17 05:00:55.244711 | orchestrator | Write report file ------------------------------------------------------- 1.42s 2026-04-17 05:00:55.244722 | orchestrator | Aggregate test results step one ----------------------------------------- 1.31s 2026-04-17 05:00:55.244733 | orchestrator | Create report output directory ------------------------------------------ 1.07s 2026-04-17 05:00:55.244744 | orchestrator | Get container info ------------------------------------------------------ 1.00s 2026-04-17 05:00:55.244755 | orchestrator | Get timestamp for report file ------------------------------------------- 0.88s 2026-04-17 05:00:55.244766 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.58s 2026-04-17 05:00:55.244799 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s 2026-04-17 05:00:55.244811 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.50s 2026-04-17 05:00:55.244822 | orchestrator | Print report file information ------------------------------------------- 0.47s 2026-04-17 05:00:55.244833 | 
orchestrator | Flush handlers ---------------------------------------------------------- 0.45s 2026-04-17 05:00:55.244844 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.34s 2026-04-17 05:00:55.244855 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2026-04-17 05:00:55.244865 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.33s 2026-04-17 05:00:55.244876 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2026-04-17 05:00:55.244887 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2026-04-17 05:00:55.244898 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s 2026-04-17 05:00:55.244909 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2026-04-17 05:00:55.244919 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2026-04-17 05:00:55.244930 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s 2026-04-17 05:00:55.636205 | orchestrator | + osism validate ceph-osds 2026-04-17 05:01:17.365334 | orchestrator | 2026-04-17 05:01:17.365435 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-17 05:01:17.365449 | orchestrator | 2026-04-17 05:01:17.365459 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-17 05:01:17.365469 | orchestrator | Friday 17 April 2026 05:01:12 +0000 (0:00:00.447) 0:00:00.447 ********** 2026-04-17 05:01:17.365478 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 05:01:17.365488 | orchestrator | 2026-04-17 05:01:17.365496 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-04-17 05:01:17.365505 | orchestrator | Friday 17 April 2026 05:01:13 +0000 (0:00:00.899) 0:00:01.347 ********** 2026-04-17 05:01:17.365514 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 05:01:17.365522 | orchestrator | 2026-04-17 05:01:17.365531 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-17 05:01:17.365540 | orchestrator | Friday 17 April 2026 05:01:13 +0000 (0:00:00.582) 0:00:01.929 ********** 2026-04-17 05:01:17.365548 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 05:01:17.365557 | orchestrator | 2026-04-17 05:01:17.365566 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-17 05:01:17.365575 | orchestrator | Friday 17 April 2026 05:01:14 +0000 (0:00:00.759) 0:00:02.688 ********** 2026-04-17 05:01:17.365583 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:17.365593 | orchestrator | 2026-04-17 05:01:17.365603 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-17 05:01:17.365612 | orchestrator | Friday 17 April 2026 05:01:14 +0000 (0:00:00.141) 0:00:02.830 ********** 2026-04-17 05:01:17.365621 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:17.365630 | orchestrator | 2026-04-17 05:01:17.365639 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-17 05:01:17.365662 | orchestrator | Friday 17 April 2026 05:01:15 +0000 (0:00:00.137) 0:00:02.967 ********** 2026-04-17 05:01:17.365671 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:17.365680 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:01:17.365688 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:01:17.365697 | orchestrator | 2026-04-17 05:01:17.365706 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-04-17 05:01:17.365715 | orchestrator | Friday 17 April 2026 05:01:15 +0000 (0:00:00.324) 0:00:03.292 ********** 2026-04-17 05:01:17.365723 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:17.365750 | orchestrator | 2026-04-17 05:01:17.365759 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-17 05:01:17.365768 | orchestrator | Friday 17 April 2026 05:01:15 +0000 (0:00:00.152) 0:00:03.444 ********** 2026-04-17 05:01:17.365777 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:17.365785 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:17.365794 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:17.365802 | orchestrator | 2026-04-17 05:01:17.365811 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-17 05:01:17.365819 | orchestrator | Friday 17 April 2026 05:01:15 +0000 (0:00:00.342) 0:00:03.787 ********** 2026-04-17 05:01:17.365828 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:17.365836 | orchestrator | 2026-04-17 05:01:17.365845 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 05:01:17.365893 | orchestrator | Friday 17 April 2026 05:01:16 +0000 (0:00:00.889) 0:00:04.677 ********** 2026-04-17 05:01:17.365904 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:17.365914 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:17.365924 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:17.365940 | orchestrator | 2026-04-17 05:01:17.365955 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-17 05:01:17.365969 | orchestrator | Friday 17 April 2026 05:01:17 +0000 (0:00:00.331) 0:00:05.008 ********** 2026-04-17 05:01:17.365990 | orchestrator | skipping: [testbed-node-3] => (item={'id': '565a3cb715ca0fdce904470f539f9db845131d78d743301ffcda269e4515b5aa', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-04-17 05:01:17.366010 | orchestrator | skipping: [testbed-node-3] => (item={'id': '67964b73713c96534354bab0c5cbb79a60f3012dbecb5d214b3e70d52807d722', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-17 05:01:17.366110 | orchestrator | skipping: [testbed-node-3] => (item={'id': '733630cdee87151079a2fff7356f5c39e10f314d5f218bfe15e54a3f3c890039', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-17 05:01:17.366129 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9d8479f602fa277a5ce115c3bf6559176b05f5f72b441f99b448966f6dbf37fd', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-04-17 05:01:17.366144 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5b54ee693f692e6b25fb021c9325c01d033c7eed5bb207aef3d48ff96da5403c', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-04-17 05:01:17.366183 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0cfb7b3bdb1a378eb59cd7acbe0d6ffb646fb1ab58aa737f08b37bc00423689b', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-04-17 05:01:17.366195 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2550c0581b646f6613083b977e60be96aadb7b77c188668d4a9ae37f44e00fbc', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-04-17 05:01:17.366206 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e0f886a96f54aca7f84c7bed21205698192a4c7aeb397258fe6ac7f720a45448', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-04-17 05:01:17.366217 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd2975ce901e9a6894d6bbddf26b3734222b5d010157608f859283d3a361bbcee', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:17.366242 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aeceef53254688ca821f6909fea46ea7a5bdb413c6ebcd756a71966541843887', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:17.366252 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6fe718a920a3c19c20e497e460030eceef64f738f869bafaa57a3531b4fe7317', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:17.366262 | orchestrator | ok: [testbed-node-3] => (item={'id': '865e35f712ee45f14147fd8d23a0032305bf8c082d1322d94f803f9e29ded3ff', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-17 05:01:17.366272 | orchestrator | ok: [testbed-node-3] => (item={'id': '3ee4d55558bd466708406c7008b56834ce19d1b1975bfe91a64010ff729a1726', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-17 05:01:17.366281 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'e346b3cbd5d2c6b193e0ca8e0b6567481314eff75d71471bfe445bf32065e3d4', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:17.366290 | orchestrator | skipping: [testbed-node-3] => (item={'id': '32380b47e185d8ab4426c9319f3b1f3d255edf846998202790c1a4e132c854d5', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-17 05:01:17.366299 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4ec10772672a5073d7f2cc7a57082453684d08328411b7e241d023e37ec5bae9', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-17 05:01:17.366308 | orchestrator | skipping: [testbed-node-3] => (item={'id': '10a95827581768c7a9c47959cca419d8a39b6a0bb5f67aefb3f359799b5abb79', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-17 05:01:17.366317 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1c88ed9c97fde5aa42e49865dba422f57034feebfa2db04e2ad566eed6971733', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-17 05:01:17.366326 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4a37aa99fc1bf4029e88053451e85815aca1c844b8b682ba09197aced9e17d3b', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-17 05:01:17.366335 | orchestrator | skipping: [testbed-node-4] => (item={'id': '07f60bf20782108290dcb08a14cce0305d6baa86f5eeedccebb2457a29bf681d', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-04-17 05:01:17.366350 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b9bd6f9cd6d221c54e06dcc0a814d2ec0c155f095dbcb567c17cd87be88401b9', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-17 05:01:17.648395 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ab30557211f4ac04025402186a70cfc27b5703996c8f25f89ea95138fccf4e52', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-17 05:01:17.648522 | orchestrator | skipping: [testbed-node-4] => (item={'id': '28e7c1c5670e0707f51ff76cd0f6761b67af35457648a6ac3b0dd68e223dae05', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-04-17 05:01:17.648538 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e80fca0a6952e51ff90e9bf796fafadd1cbe2fd8bfb9b1ec945a03c782ed24be', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-04-17 05:01:17.648553 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dece4b0d417a692077f56626fd66d5aa221869d2c0dab3e809ce6b6c2623c44a', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-04-17 05:01:17.648565 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c15232b4faaabc90220080ff0e7da8590d0a48ca5662c3b7b6297f7c5d1fdeff', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-04-17 05:01:17.648619 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd69a4f43d720e0d8558bea6dd30cfbb632cd570745497f4727c885d69a5b7823', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-04-17 05:01:17.648632 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd33c29f10b48f870afad9791f5ca30f8a7b536d4943ff0ab5b783bbab751658c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:17.648645 | orchestrator | skipping: [testbed-node-4] => (item={'id': '902b0a83ce238f731bda4aec913d64211cc8453281ee5bc372254810f44c15e9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:17.648656 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5dd7d958b89613978d418ac99a5900a59b82d669a25fb17d22ed6e7b7cb71a0f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:17.648669 | orchestrator | ok: [testbed-node-4] => (item={'id': '8b89264f8fcaa1b3c42b9581df95d817cd2b457231208261782430a582174b11', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-17 05:01:17.648681 | orchestrator | ok: [testbed-node-4] => (item={'id': '69a693cfa53791930f8f51734b01248d3f43d665fb1780b92197e2239bf21ce0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-17 05:01:17.648692 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'c8de2399d54d41ae936f8d907a9162c9ae429790600bc4ef9bec89843e253edb', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:17.648703 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5020c4769bce480441501c9944453bcc5f366dd0fd78c52aac11f2c35eb78a34', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-17 05:01:17.648714 | orchestrator | skipping: [testbed-node-4] => (item={'id': '09d21208cb37ee7608348b6f8ae953373791c1752c7e88d7ea089b2489547958', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-17 05:01:17.648744 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eb4e0029876d95921ced89d92b6ffae43770989a6eeff11615d7fa6412e8575d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-17 05:01:17.648764 | orchestrator | skipping: [testbed-node-4] => (item={'id': '39f6724e04bb5a65723f2e1b0896148ea37f038383b3496fcfa338f29f07c596', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-17 05:01:17.648775 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ed1e29f4cc854b311ae3480be3b4b1bc75ed3fb1e4a6b99d987fc6866b7313f0', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-17 05:01:17.648786 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6c03af54368aff021fcfba26c72a959f742375e8ee6b4ba511579bb805fef74c', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-04-17 05:01:17.648802 | orchestrator | skipping: [testbed-node-5] => (item={'id': '27955829dcc1c1e1bb8398a7cc4595efc223bcd4a87fa95bcc961e3519b326c8', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-17 05:01:17.648814 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'db464a6a0c3c5c621beab8a2f43983a81102c7badb80240124282b978274b9fb', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-17 05:01:17.648825 | orchestrator | skipping: [testbed-node-5] => (item={'id': '17ef54ce5fc1a81fef6562aa43d7bbd7350607815a7eed82c220c8e543dffb9b', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-04-17 05:01:17.648836 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd516bc7543ab8fc7e62405f97858f8052d87fc2d7ad3ff6d1bb5a5abce96328c', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-04-17 05:01:17.648847 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd6eaf69c02933e3550e6db7bbf9b19bf822410b39cab1b4d2219e6a18282c10e', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-04-17 05:01:17.648857 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f5a156fe9de3df959dcc61f8f5bf2bee0fccc1f3a568f7ac8b5e8475d394e123', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-04-17 05:01:17.648868 | orchestrator | skipping: [testbed-node-5] => (item={'id': '46b44344b50ce180ade3d27ae75de2ed506261a6a127311554366d0180e775ce', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-04-17 05:01:17.648879 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ca68533e3437ae0ddf9c29b2d6bf6c30648ddf91c37a37f9be6a5ff1d19a965b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:17.648890 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a81e69b8f5b741d1dff7830ffe1dba55c2020279ce96bb3963b90f782f0bd4c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:17.648902 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a980f6a7be8eb9c0f275bfc705a2882f50f7eee4066ce79d376b5ffd2cbebcae', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:17.648922 | orchestrator | ok: [testbed-node-5] => (item={'id': '8db4ab90b5bd953dd6be314361e99b4d3e738f180e8d522183bd6adfcd401431', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-17 05:01:17.648942 | orchestrator | ok: [testbed-node-5] => (item={'id': '102b2e0ad47034dba258af74dee95a20c1884b44051999558547625264774f0c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-17 05:01:29.573673 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'b5ec4872cae7518289db00da8eea49fbc06db254b1f3b0bd0fcd8e6564b9cc96', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-17 05:01:29.573783 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2c44e7619b788ee70662268f7cac5bfbd6fc7538f461203225d0c72118620a70', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-17 05:01:29.573801 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ab83b89bbd75a4b96338d01ceac6d94d1435df778f515121c538560e295503c7', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-17 05:01:29.573829 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2713c707941a0acaa1df813f02c6c01e6f05e017c37e9dde9685bac09a1e551d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-17 05:01:29.573842 | orchestrator | skipping: [testbed-node-5] => (item={'id': '97c4e337408f68de83f6a14d2cd2143cfe60c02382d5730179dd82fc276dc8ce', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-17 05:01:29.573854 | orchestrator | skipping: [testbed-node-5] => (item={'id': '020ec934888c58d420941142c9b5ea9347d28496959783fd9c479dbac309698a', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-17 05:01:29.573866 | orchestrator | 2026-04-17 05:01:29.573879 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-17 05:01:29.573891 | orchestrator | Friday 17 April 2026 
05:01:17 +0000 (0:00:00.556) 0:00:05.565 ********** 2026-04-17 05:01:29.573903 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:29.573915 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:29.573926 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:29.573936 | orchestrator | 2026-04-17 05:01:29.573947 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-17 05:01:29.573958 | orchestrator | Friday 17 April 2026 05:01:17 +0000 (0:00:00.315) 0:00:05.881 ********** 2026-04-17 05:01:29.573970 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:29.573981 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:01:29.573992 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:01:29.574003 | orchestrator | 2026-04-17 05:01:29.574014 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-17 05:01:29.574102 | orchestrator | Friday 17 April 2026 05:01:18 +0000 (0:00:00.559) 0:00:06.440 ********** 2026-04-17 05:01:29.574114 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:29.574125 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:29.574136 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:29.574147 | orchestrator | 2026-04-17 05:01:29.574158 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 05:01:29.574168 | orchestrator | Friday 17 April 2026 05:01:18 +0000 (0:00:00.331) 0:00:06.771 ********** 2026-04-17 05:01:29.574201 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:29.574213 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:29.574227 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:29.574239 | orchestrator | 2026-04-17 05:01:29.574251 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-17 05:01:29.574264 | orchestrator | Friday 17 April 2026 05:01:19 +0000 (0:00:00.296) 0:00:07.068 ********** 
2026-04-17 05:01:29.574277 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-17 05:01:29.574291 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-17 05:01:29.574304 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:29.574317 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-17 05:01:29.574329 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-17 05:01:29.574342 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:01:29.574354 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-17 05:01:29.574367 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-17 05:01:29.574380 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:01:29.574392 | orchestrator | 2026-04-17 05:01:29.574405 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-17 05:01:29.574417 | orchestrator | Friday 17 April 2026 05:01:19 +0000 (0:00:00.337) 0:00:07.405 ********** 2026-04-17 05:01:29.574430 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:29.574443 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:29.574457 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:29.574469 | orchestrator | 2026-04-17 05:01:29.574482 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-17 05:01:29.574495 | orchestrator | Friday 17 April 2026 05:01:20 +0000 (0:00:00.555) 0:00:07.961 ********** 2026-04-17 05:01:29.574508 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:29.574537 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:01:29.574550 | 
orchestrator | skipping: [testbed-node-5] 2026-04-17 05:01:29.574563 | orchestrator | 2026-04-17 05:01:29.574575 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-17 05:01:29.574586 | orchestrator | Friday 17 April 2026 05:01:20 +0000 (0:00:00.312) 0:00:08.273 ********** 2026-04-17 05:01:29.574597 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:29.574608 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:01:29.574619 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:01:29.574630 | orchestrator | 2026-04-17 05:01:29.574649 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-17 05:01:29.574668 | orchestrator | Friday 17 April 2026 05:01:20 +0000 (0:00:00.356) 0:00:08.630 ********** 2026-04-17 05:01:29.574686 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:29.574704 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:29.574724 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:29.574743 | orchestrator | 2026-04-17 05:01:29.574761 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 05:01:29.574777 | orchestrator | Friday 17 April 2026 05:01:21 +0000 (0:00:00.330) 0:00:08.961 ********** 2026-04-17 05:01:29.574788 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:29.574799 | orchestrator | 2026-04-17 05:01:29.574810 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 05:01:29.574820 | orchestrator | Friday 17 April 2026 05:01:21 +0000 (0:00:00.806) 0:00:09.767 ********** 2026-04-17 05:01:29.574896 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:29.574910 | orchestrator | 2026-04-17 05:01:29.574921 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-17 05:01:29.574932 | orchestrator | Friday 17 April 2026 05:01:22 +0000 (0:00:00.287) 
0:00:10.054 ********** 2026-04-17 05:01:29.574955 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:29.574966 | orchestrator | 2026-04-17 05:01:29.574977 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:01:29.574988 | orchestrator | Friday 17 April 2026 05:01:22 +0000 (0:00:00.275) 0:00:10.330 ********** 2026-04-17 05:01:29.574999 | orchestrator | 2026-04-17 05:01:29.575010 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:01:29.575021 | orchestrator | Friday 17 April 2026 05:01:22 +0000 (0:00:00.089) 0:00:10.420 ********** 2026-04-17 05:01:29.575032 | orchestrator | 2026-04-17 05:01:29.575043 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:01:29.575077 | orchestrator | Friday 17 April 2026 05:01:22 +0000 (0:00:00.076) 0:00:10.496 ********** 2026-04-17 05:01:29.575089 | orchestrator | 2026-04-17 05:01:29.575099 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 05:01:29.575110 | orchestrator | Friday 17 April 2026 05:01:22 +0000 (0:00:00.075) 0:00:10.572 ********** 2026-04-17 05:01:29.575121 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:29.575131 | orchestrator | 2026-04-17 05:01:29.575142 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-17 05:01:29.575153 | orchestrator | Friday 17 April 2026 05:01:22 +0000 (0:00:00.272) 0:00:10.844 ********** 2026-04-17 05:01:29.575163 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:29.575174 | orchestrator | 2026-04-17 05:01:29.575185 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 05:01:29.575196 | orchestrator | Friday 17 April 2026 05:01:23 +0000 (0:00:00.266) 0:00:11.110 ********** 2026-04-17 05:01:29.575206 | orchestrator | ok: 
[testbed-node-3] 2026-04-17 05:01:29.575217 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:29.575228 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:29.575239 | orchestrator | 2026-04-17 05:01:29.575249 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-17 05:01:29.575260 | orchestrator | Friday 17 April 2026 05:01:23 +0000 (0:00:00.326) 0:00:11.437 ********** 2026-04-17 05:01:29.575271 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:29.575281 | orchestrator | 2026-04-17 05:01:29.575292 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-17 05:01:29.575303 | orchestrator | Friday 17 April 2026 05:01:24 +0000 (0:00:00.790) 0:00:12.227 ********** 2026-04-17 05:01:29.575314 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 05:01:29.575324 | orchestrator | 2026-04-17 05:01:29.575335 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-17 05:01:29.575345 | orchestrator | Friday 17 April 2026 05:01:25 +0000 (0:00:01.596) 0:00:13.824 ********** 2026-04-17 05:01:29.575356 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:29.575367 | orchestrator | 2026-04-17 05:01:29.575377 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-17 05:01:29.575388 | orchestrator | Friday 17 April 2026 05:01:26 +0000 (0:00:00.154) 0:00:13.978 ********** 2026-04-17 05:01:29.575398 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:29.575409 | orchestrator | 2026-04-17 05:01:29.575420 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-17 05:01:29.575431 | orchestrator | Friday 17 April 2026 05:01:26 +0000 (0:00:00.350) 0:00:14.329 ********** 2026-04-17 05:01:29.575441 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:29.575452 | orchestrator | 
2026-04-17 05:01:29.575463 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-17 05:01:29.575474 | orchestrator | Friday 17 April 2026 05:01:26 +0000 (0:00:00.131) 0:00:14.461 ********** 2026-04-17 05:01:29.575484 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:29.575495 | orchestrator | 2026-04-17 05:01:29.575506 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 05:01:29.575517 | orchestrator | Friday 17 April 2026 05:01:26 +0000 (0:00:00.138) 0:00:14.599 ********** 2026-04-17 05:01:29.575527 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:29.575545 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:29.575556 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:29.575567 | orchestrator | 2026-04-17 05:01:29.575578 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-17 05:01:29.575588 | orchestrator | Friday 17 April 2026 05:01:26 +0000 (0:00:00.318) 0:00:14.917 ********** 2026-04-17 05:01:29.575599 | orchestrator | changed: [testbed-node-3] 2026-04-17 05:01:29.575610 | orchestrator | changed: [testbed-node-4] 2026-04-17 05:01:29.575620 | orchestrator | changed: [testbed-node-5] 2026-04-17 05:01:40.885034 | orchestrator | 2026-04-17 05:01:40.885245 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-17 05:01:40.885264 | orchestrator | Friday 17 April 2026 05:01:29 +0000 (0:00:02.570) 0:00:17.488 ********** 2026-04-17 05:01:40.885275 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:40.885286 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:40.885295 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:40.885305 | orchestrator | 2026-04-17 05:01:40.885315 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-17 05:01:40.885325 | orchestrator | Friday 17 April 2026 
05:01:29 +0000 (0:00:00.341) 0:00:17.829 ********** 2026-04-17 05:01:40.885334 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:40.885344 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:40.885354 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:40.885363 | orchestrator | 2026-04-17 05:01:40.885373 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-17 05:01:40.885382 | orchestrator | Friday 17 April 2026 05:01:30 +0000 (0:00:00.664) 0:00:18.493 ********** 2026-04-17 05:01:40.885392 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:40.885403 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:01:40.885412 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:01:40.885422 | orchestrator | 2026-04-17 05:01:40.885431 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-17 05:01:40.885441 | orchestrator | Friday 17 April 2026 05:01:30 +0000 (0:00:00.361) 0:00:18.855 ********** 2026-04-17 05:01:40.885451 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:40.885460 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:40.885470 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:40.885479 | orchestrator | 2026-04-17 05:01:40.885489 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-17 05:01:40.885498 | orchestrator | Friday 17 April 2026 05:01:31 +0000 (0:00:00.605) 0:00:19.461 ********** 2026-04-17 05:01:40.885508 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:40.885518 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:01:40.885545 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:01:40.885555 | orchestrator | 2026-04-17 05:01:40.885565 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-17 05:01:40.885575 | orchestrator | Friday 17 April 2026 05:01:31 +0000 (0:00:00.373) 
0:00:19.834 ********** 2026-04-17 05:01:40.885587 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:40.885597 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:01:40.885609 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:01:40.885620 | orchestrator | 2026-04-17 05:01:40.885632 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 05:01:40.885643 | orchestrator | Friday 17 April 2026 05:01:32 +0000 (0:00:00.368) 0:00:20.202 ********** 2026-04-17 05:01:40.885654 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:40.885665 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:40.885676 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:40.885687 | orchestrator | 2026-04-17 05:01:40.885698 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-17 05:01:40.885709 | orchestrator | Friday 17 April 2026 05:01:32 +0000 (0:00:00.582) 0:00:20.785 ********** 2026-04-17 05:01:40.885721 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:40.885732 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:40.885742 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:40.885774 | orchestrator | 2026-04-17 05:01:40.885785 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-17 05:01:40.885796 | orchestrator | Friday 17 April 2026 05:01:33 +0000 (0:00:00.854) 0:00:21.639 ********** 2026-04-17 05:01:40.885807 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:40.885818 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:40.885829 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:40.885840 | orchestrator | 2026-04-17 05:01:40.885852 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-17 05:01:40.885863 | orchestrator | Friday 17 April 2026 05:01:34 +0000 (0:00:00.336) 0:00:21.976 ********** 2026-04-17 05:01:40.885874 | 
orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:40.885885 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:01:40.885896 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:01:40.885907 | orchestrator | 2026-04-17 05:01:40.885919 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-17 05:01:40.885931 | orchestrator | Friday 17 April 2026 05:01:34 +0000 (0:00:00.303) 0:00:22.279 ********** 2026-04-17 05:01:40.885942 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:01:40.885951 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:01:40.885961 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:01:40.885970 | orchestrator | 2026-04-17 05:01:40.885980 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-17 05:01:40.885990 | orchestrator | Friday 17 April 2026 05:01:34 +0000 (0:00:00.555) 0:00:22.835 ********** 2026-04-17 05:01:40.885999 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 05:01:40.886009 | orchestrator | 2026-04-17 05:01:40.886100 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-17 05:01:40.886112 | orchestrator | Friday 17 April 2026 05:01:35 +0000 (0:00:00.288) 0:00:23.124 ********** 2026-04-17 05:01:40.886122 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:01:40.886132 | orchestrator | 2026-04-17 05:01:40.886141 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 05:01:40.886161 | orchestrator | Friday 17 April 2026 05:01:35 +0000 (0:00:00.313) 0:00:23.437 ********** 2026-04-17 05:01:40.886170 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 05:01:40.886180 | orchestrator | 2026-04-17 05:01:40.886190 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 05:01:40.886199 | 
orchestrator | Friday 17 April 2026 05:01:37 +0000 (0:00:01.760) 0:00:25.198 ********** 2026-04-17 05:01:40.886209 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 05:01:40.886218 | orchestrator | 2026-04-17 05:01:40.886228 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-17 05:01:40.886238 | orchestrator | Friday 17 April 2026 05:01:37 +0000 (0:00:00.277) 0:00:25.476 ********** 2026-04-17 05:01:40.886247 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 05:01:40.886257 | orchestrator | 2026-04-17 05:01:40.886285 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:01:40.886295 | orchestrator | Friday 17 April 2026 05:01:37 +0000 (0:00:00.318) 0:00:25.794 ********** 2026-04-17 05:01:40.886304 | orchestrator | 2026-04-17 05:01:40.886314 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:01:40.886323 | orchestrator | Friday 17 April 2026 05:01:37 +0000 (0:00:00.087) 0:00:25.881 ********** 2026-04-17 05:01:40.886333 | orchestrator | 2026-04-17 05:01:40.886343 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 05:01:40.886352 | orchestrator | Friday 17 April 2026 05:01:38 +0000 (0:00:00.075) 0:00:25.957 ********** 2026-04-17 05:01:40.886361 | orchestrator | 2026-04-17 05:01:40.886371 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-17 05:01:40.886380 | orchestrator | Friday 17 April 2026 05:01:38 +0000 (0:00:00.084) 0:00:26.042 ********** 2026-04-17 05:01:40.886390 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 05:01:40.886408 | orchestrator | 2026-04-17 05:01:40.886418 | orchestrator | TASK [Print report file information] ******************************************* 
2026-04-17 05:01:40.886427 | orchestrator | Friday 17 April 2026 05:01:39 +0000 (0:00:01.724) 0:00:27.766 ********** 2026-04-17 05:01:40.886437 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-17 05:01:40.886447 | orchestrator |  "msg": [ 2026-04-17 05:01:40.886457 | orchestrator |  "Validator run completed.", 2026-04-17 05:01:40.886472 | orchestrator |  "You can find the report file here:", 2026-04-17 05:01:40.886482 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-17T05:01:13+00:00-report.json", 2026-04-17 05:01:40.886492 | orchestrator |  "on the following host:", 2026-04-17 05:01:40.886508 | orchestrator |  "testbed-manager" 2026-04-17 05:01:40.886524 | orchestrator |  ] 2026-04-17 05:01:40.886541 | orchestrator | } 2026-04-17 05:01:40.886557 | orchestrator | 2026-04-17 05:01:40.886573 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:01:40.886591 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 05:01:40.886606 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-17 05:01:40.886623 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-17 05:01:40.886639 | orchestrator | 2026-04-17 05:01:40.886655 | orchestrator | 2026-04-17 05:01:40.886671 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:01:40.886688 | orchestrator | Friday 17 April 2026 05:01:40 +0000 (0:00:00.646) 0:00:28.412 ********** 2026-04-17 05:01:40.886705 | orchestrator | =============================================================================== 2026-04-17 05:01:40.886722 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.57s 2026-04-17 05:01:40.886739 | orchestrator | Aggregate test results 
step one ----------------------------------------- 1.76s 2026-04-17 05:01:40.886749 | orchestrator | Write report file ------------------------------------------------------- 1.72s 2026-04-17 05:01:40.886759 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.60s 2026-04-17 05:01:40.886769 | orchestrator | Get timestamp for report file ------------------------------------------- 0.90s 2026-04-17 05:01:40.886778 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.89s 2026-04-17 05:01:40.886787 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.85s 2026-04-17 05:01:40.886797 | orchestrator | Aggregate test results step one ----------------------------------------- 0.81s 2026-04-17 05:01:40.886806 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.79s 2026-04-17 05:01:40.886816 | orchestrator | Create report output directory ------------------------------------------ 0.76s 2026-04-17 05:01:40.886825 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.66s 2026-04-17 05:01:40.886834 | orchestrator | Print report file information ------------------------------------------- 0.65s 2026-04-17 05:01:40.886844 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.61s 2026-04-17 05:01:40.886853 | orchestrator | Prepare test data ------------------------------------------------------- 0.58s 2026-04-17 05:01:40.886863 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.58s 2026-04-17 05:01:40.886872 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.56s 2026-04-17 05:01:40.886933 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.56s 2026-04-17 05:01:40.886944 | orchestrator | Pass test if no sub test failed 
----------------------------------------- 0.56s 2026-04-17 05:01:40.886953 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.56s 2026-04-17 05:01:40.886972 | orchestrator | Fail if count of unencrypted OSDs does not match ------------------------ 0.37s 2026-04-17 05:01:41.286125 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-17 05:01:41.294538 | orchestrator | + set -e 2026-04-17 05:01:41.294620 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 05:01:41.294641 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 05:01:41.294660 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 05:01:41.294679 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 05:01:41.294696 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 05:01:41.294715 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 05:01:41.294753 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 05:01:41.294771 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-17 05:01:41.294788 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-17 05:01:41.294806 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 05:01:41.294823 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 05:01:41.294839 | orchestrator | ++ export ARA=false 2026-04-17 05:01:41.294857 | orchestrator | ++ ARA=false 2026-04-17 05:01:41.294876 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 05:01:41.294894 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 05:01:41.294913 | orchestrator | ++ export TEMPEST=false 2026-04-17 05:01:41.294931 | orchestrator | ++ TEMPEST=false 2026-04-17 05:01:41.294949 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 05:01:41.294968 | orchestrator | ++ IS_ZUUL=true 2026-04-17 05:01:41.294986 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 05:01:41.295005 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 05:01:41.295024 | orchestrator | 
++ export EXTERNAL_API=false 2026-04-17 05:01:41.295041 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 05:01:41.295060 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 05:01:41.295101 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 05:01:41.295118 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 05:01:41.295136 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 05:01:41.295154 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 05:01:41.295174 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 05:01:41.295193 | orchestrator | + source /etc/os-release 2026-04-17 05:01:41.295212 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-17 05:01:41.295232 | orchestrator | ++ NAME=Ubuntu 2026-04-17 05:01:41.295251 | orchestrator | ++ VERSION_ID=24.04 2026-04-17 05:01:41.295270 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-17 05:01:41.295289 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-17 05:01:41.295308 | orchestrator | ++ ID=ubuntu 2026-04-17 05:01:41.295326 | orchestrator | ++ ID_LIKE=debian 2026-04-17 05:01:41.295345 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-17 05:01:41.295363 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-17 05:01:41.295383 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-17 05:01:41.295402 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-17 05:01:41.295422 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-17 05:01:41.295441 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-17 05:01:41.295460 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-17 05:01:41.295500 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-17 05:01:41.295521 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-17 05:01:41.317999 | 
orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-17 05:02:05.564819 | orchestrator | 2026-04-17 05:02:05.564968 | orchestrator | # Status of Elasticsearch 2026-04-17 05:02:05.565000 | orchestrator | 2026-04-17 05:02:05.565019 | orchestrator | + pushd /opt/configuration/contrib 2026-04-17 05:02:05.565039 | orchestrator | + echo 2026-04-17 05:02:05.565058 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-17 05:02:05.565077 | orchestrator | + echo 2026-04-17 05:02:05.565095 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-17 05:02:05.735603 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-17 05:02:05.735705 | orchestrator | 2026-04-17 05:02:05.735725 | orchestrator | # Status of MariaDB 2026-04-17 05:02:05.735766 | orchestrator | 2026-04-17 05:02:05.735774 | orchestrator | + echo 2026-04-17 05:02:05.735782 | orchestrator | + echo '# Status of MariaDB' 2026-04-17 05:02:05.735789 | orchestrator | + echo 2026-04-17 05:02:05.736345 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-17 05:02:05.799825 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-17 05:02:05.799923 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-17 05:02:05.799939 | orchestrator | + MARIADB_USER=root_shard_0 2026-04-17 05:02:05.799952 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-04-17 05:02:05.868973 | orchestrator | Reading package lists... 2026-04-17 05:02:06.230578 | orchestrator | Building dependency tree... 
2026-04-17 05:02:06.230980 | orchestrator | Reading state information... 2026-04-17 05:02:06.605594 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-04-17 05:02:06.605695 | orchestrator | bc set to manually installed. 2026-04-17 05:02:06.605713 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 2026-04-17 05:02:07.290544 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-04-17 05:02:07.291086 | orchestrator | 2026-04-17 05:02:07.291409 | orchestrator | # Status of Prometheus 2026-04-17 05:02:07.291459 | orchestrator | 2026-04-17 05:02:07.291479 | orchestrator | + echo 2026-04-17 05:02:07.291497 | orchestrator | + echo '# Status of Prometheus' 2026-04-17 05:02:07.291514 | orchestrator | + echo 2026-04-17 05:02:07.291532 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-17 05:02:07.354196 | orchestrator | Unauthorized 2026-04-17 05:02:07.357535 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-17 05:02:07.425995 | orchestrator | Unauthorized 2026-04-17 05:02:07.429499 | orchestrator | 2026-04-17 05:02:07.429534 | orchestrator | # Status of RabbitMQ 2026-04-17 05:02:07.429547 | orchestrator | 2026-04-17 05:02:07.429559 | orchestrator | + echo 2026-04-17 05:02:07.429570 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-17 05:02:07.429582 | orchestrator | + echo 2026-04-17 05:02:07.430878 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-17 05:02:07.496521 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-17 05:02:07.496584 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-17 05:02:07.496593 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-04-17 05:02:07.930374 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-04-17 05:02:07.939650 | orchestrator | 2026-04-17 05:02:07.939683 | 
orchestrator | # Status of Redis 2026-04-17 05:02:07.939691 | orchestrator | 2026-04-17 05:02:07.939698 | orchestrator | + echo 2026-04-17 05:02:07.939705 | orchestrator | + echo '# Status of Redis' 2026-04-17 05:02:07.939712 | orchestrator | + echo 2026-04-17 05:02:07.939720 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-17 05:02:07.944441 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001366s;;;0.000000;10.000000 2026-04-17 05:02:07.944578 | orchestrator | + popd 2026-04-17 05:02:07.944913 | orchestrator | 2026-04-17 05:02:07.944931 | orchestrator | # Create backup of MariaDB database 2026-04-17 05:02:07.944941 | orchestrator | 2026-04-17 05:02:07.944951 | orchestrator | + echo 2026-04-17 05:02:07.944961 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-17 05:02:07.944972 | orchestrator | + echo 2026-04-17 05:02:07.944981 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-17 05:02:10.066434 | orchestrator | 2026-04-17 05:02:10 | INFO  | Task 746d3e78-5881-489c-88dd-8d474bf34221 (mariadb_backup) was prepared for execution. 2026-04-17 05:02:10.066534 | orchestrator | 2026-04-17 05:02:10 | INFO  | It takes a moment until task 746d3e78-5881-489c-88dd-8d474bf34221 (mariadb_backup) has been started and output is visible here. 
2026-04-17 05:03:49.347716 | orchestrator | 2026-04-17 05:03:49.347836 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 05:03:49.347853 | orchestrator | 2026-04-17 05:03:49.347866 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 05:03:49.347878 | orchestrator | Friday 17 April 2026 05:02:14 +0000 (0:00:00.175) 0:00:00.175 ********** 2026-04-17 05:03:49.347889 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:03:49.347901 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:03:49.347912 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:03:49.347946 | orchestrator | 2026-04-17 05:03:49.347958 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 05:03:49.347969 | orchestrator | Friday 17 April 2026 05:02:14 +0000 (0:00:00.341) 0:00:00.517 ********** 2026-04-17 05:03:49.347980 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-17 05:03:49.348000 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-17 05:03:49.348019 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-17 05:03:49.348038 | orchestrator | 2026-04-17 05:03:49.348057 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-17 05:03:49.348079 | orchestrator | 2026-04-17 05:03:49.348098 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-17 05:03:49.348116 | orchestrator | Friday 17 April 2026 05:02:15 +0000 (0:00:00.646) 0:00:01.164 ********** 2026-04-17 05:03:49.348128 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 05:03:49.348139 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-17 05:03:49.348150 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-17 05:03:49.348161 | orchestrator | 
2026-04-17 05:03:49.348171 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 05:03:49.348182 | orchestrator | Friday 17 April 2026 05:02:15 +0000 (0:00:00.426) 0:00:01.591 ********** 2026-04-17 05:03:49.348209 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:03:49.348222 | orchestrator | 2026-04-17 05:03:49.348235 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-17 05:03:49.348249 | orchestrator | Friday 17 April 2026 05:02:16 +0000 (0:00:00.591) 0:00:02.183 ********** 2026-04-17 05:03:49.348262 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:03:49.348274 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:03:49.348287 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:03:49.348300 | orchestrator | 2026-04-17 05:03:49.348337 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-17 05:03:49.348351 | orchestrator | Friday 17 April 2026 05:02:19 +0000 (0:00:03.317) 0:00:05.501 ********** 2026-04-17 05:03:49.348364 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-17 05:03:49.348377 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-17 05:03:49.348390 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-17 05:03:49.348402 | orchestrator | mariadb_bootstrap_restart 2026-04-17 05:03:49.348416 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:03:49.348428 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:03:49.348440 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:03:49.348453 | orchestrator | 2026-04-17 05:03:49.348466 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-17 05:03:49.348479 | orchestrator | 
skipping: no hosts matched 2026-04-17 05:03:49.348492 | orchestrator | 2026-04-17 05:03:49.348504 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-17 05:03:49.348516 | orchestrator | skipping: no hosts matched 2026-04-17 05:03:49.348529 | orchestrator | 2026-04-17 05:03:49.348541 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-17 05:03:49.348554 | orchestrator | skipping: no hosts matched 2026-04-17 05:03:49.348567 | orchestrator | 2026-04-17 05:03:49.348580 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-17 05:03:49.348593 | orchestrator | 2026-04-17 05:03:49.348606 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-17 05:03:49.348618 | orchestrator | Friday 17 April 2026 05:03:48 +0000 (0:01:28.376) 0:01:33.878 ********** 2026-04-17 05:03:49.348641 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:03:49.348652 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:03:49.348673 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:03:49.348694 | orchestrator | 2026-04-17 05:03:49.348705 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-17 05:03:49.348715 | orchestrator | Friday 17 April 2026 05:03:48 +0000 (0:00:00.315) 0:01:34.193 ********** 2026-04-17 05:03:49.348726 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:03:49.348737 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:03:49.348747 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:03:49.348758 | orchestrator | 2026-04-17 05:03:49.348769 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:03:49.348781 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 
05:03:49.348793 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 05:03:49.348804 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 05:03:49.348815 | orchestrator | 2026-04-17 05:03:49.348825 | orchestrator | 2026-04-17 05:03:49.348836 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:03:49.348847 | orchestrator | Friday 17 April 2026 05:03:48 +0000 (0:00:00.453) 0:01:34.647 ********** 2026-04-17 05:03:49.348858 | orchestrator | =============================================================================== 2026-04-17 05:03:49.348869 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 88.38s 2026-04-17 05:03:49.348898 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.32s 2026-04-17 05:03:49.348909 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2026-04-17 05:03:49.348920 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.59s 2026-04-17 05:03:49.348931 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.45s 2026-04-17 05:03:49.348942 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s 2026-04-17 05:03:49.348953 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-04-17 05:03:49.348963 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2026-04-17 05:03:49.723833 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-17 05:03:49.733059 | orchestrator | + set -e 2026-04-17 05:03:49.733117 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 05:03:49.734350 | orchestrator | ++ export 
INTERACTIVE=false 2026-04-17 05:03:49.734380 | orchestrator | ++ INTERACTIVE=false 2026-04-17 05:03:49.734391 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 05:03:49.734404 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 05:03:49.734426 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-17 05:03:49.736152 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-17 05:03:49.739977 | orchestrator | 2026-04-17 05:03:49.740035 | orchestrator | # OpenStack endpoints 2026-04-17 05:03:49.740050 | orchestrator | 2026-04-17 05:03:49.740062 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-17 05:03:49.740074 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-17 05:03:49.740085 | orchestrator | + export OS_CLOUD=admin 2026-04-17 05:03:49.740096 | orchestrator | + OS_CLOUD=admin 2026-04-17 05:03:49.740107 | orchestrator | + echo 2026-04-17 05:03:49.740118 | orchestrator | + echo '# OpenStack endpoints' 2026-04-17 05:03:49.740129 | orchestrator | + echo 2026-04-17 05:03:49.740140 | orchestrator | + openstack endpoint list 2026-04-17 05:03:52.921571 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-17 05:03:52.921672 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-17 05:03:52.921688 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-17 05:03:52.921723 | orchestrator | | 1583874a3a854f88ae2dd7661342cd56 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-17 05:03:52.921735 | orchestrator | | 17bb988afc5f4af2ab9226b869dd2c14 | RegionOne | 
glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-17 05:03:52.921746 | orchestrator | | 1a54bcbd716547209ee14dd4544e7bc2 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-04-17 05:03:52.921756 | orchestrator | | 21e6acc2cfbe492780c7b7898760048b | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-17 05:03:52.921767 | orchestrator | | 2ca49976c6774efdad26fd92aec8c4f6 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-17 05:03:52.921778 | orchestrator | | 2e074997e0d44eaca5fbfa8640f76cdc | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-17 05:03:52.921789 | orchestrator | | 2facb37b063a4bdea688c9a6b189c931 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-17 05:03:52.921800 | orchestrator | | 3e1b87a3a4ae4eec96c84a31e231ff23 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-17 05:03:52.921811 | orchestrator | | 406691b8452e499ca61f11a0a0ac6a00 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-17 05:03:52.921821 | orchestrator | | 4641d5511fc04a14bc35c7ca3f41a9d8 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-17 05:03:52.921850 | orchestrator | | 4d01b9be3cdc408c867a33403d9d1fca | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-17 05:03:52.921862 | orchestrator | | 5c7335f6ccc14893b03c6693a1e2c496 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-17 05:03:52.921873 | orchestrator | | 61470b1094334e55a70af8528263109b | RegionOne | cinderv3 | volumev3 | True | internal | 
https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-17 05:03:52.921884 | orchestrator | | 63fde595c8584042a01b658fef870ca6 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-17 05:03:52.921895 | orchestrator | | 67e90a3ab684402d89a71eef7bbf0eb2 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-04-17 05:03:52.921906 | orchestrator | | 96f121b9132140fb9ec7aa0107e961bd | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-17 05:03:52.921916 | orchestrator | | 97f3948d0e3f4194b0d102d45b13fe5f | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-17 05:03:52.921927 | orchestrator | | a4c83bcdd9a04f2582000e99a6747a8d | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-17 05:03:52.921938 | orchestrator | | ae6c659dea3b4bf885cc366671eb521e | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-17 05:03:52.921949 | orchestrator | | b047ab8f78e542128272a3da56722374 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-04-17 05:03:52.921984 | orchestrator | | b9825b5a0fcc47f599d0a336ce54069d | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-04-17 05:03:52.922002 | orchestrator | | d28b4d4ff2e04ba1a773bb4ed3405bc6 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-04-17 05:03:52.922013 | orchestrator | | d587aee4ad3249878471ee0bf4d95100 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-04-17 05:03:52.922089 | orchestrator | | d95477de5e334bf7896cc2636ba80625 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-17 05:03:52.922134 | orchestrator | 
| dc7c8612fb3a4699ad796aeacab78c55 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-17 05:03:52.922148 | orchestrator | | eb39c18aaff24e64b769828479bffa00 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-17 05:03:52.922161 | orchestrator | | f520a2f51f5b478c87c5ac9d924a9408 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-17 05:03:52.922174 | orchestrator | | f71f348ee24f4754b324e54f9c6af3c4 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-17 05:03:52.922187 | orchestrator | | f7b2619542e44d6f8dc9924676ce5505 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-17 05:03:52.922199 | orchestrator | | ff4a455cdf41437cbf83b5d79899d623 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-17 05:03:52.922212 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-17 05:03:53.242882 | orchestrator | 2026-04-17 05:03:53.242962 | orchestrator | # Cinder 2026-04-17 05:03:53.242971 | orchestrator | 2026-04-17 05:03:53.242977 | orchestrator | + echo 2026-04-17 05:03:53.242984 | orchestrator | + echo '# Cinder' 2026-04-17 05:03:53.242990 | orchestrator | + echo 2026-04-17 05:03:53.242997 | orchestrator | + openstack volume service list 2026-04-17 05:03:55.885303 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-17 05:03:55.885477 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-17 05:03:55.885495 | orchestrator | 
+------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-17 05:03:55.885512 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-17T05:03:50.000000 | 2026-04-17 05:03:55.885531 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-17T05:03:50.000000 | 2026-04-17 05:03:55.885550 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-17T05:03:49.000000 | 2026-04-17 05:03:55.885568 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-17T05:03:49.000000 | 2026-04-17 05:03:55.885586 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-17T05:03:54.000000 | 2026-04-17 05:03:55.885605 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-17T05:03:55.000000 | 2026-04-17 05:03:55.885623 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-17T05:03:51.000000 | 2026-04-17 05:03:55.885643 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-17T05:03:53.000000 | 2026-04-17 05:03:55.885662 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-17T05:03:53.000000 | 2026-04-17 05:03:55.885711 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-17 05:03:56.200464 | orchestrator | 2026-04-17 05:03:56.200557 | orchestrator | # Neutron 2026-04-17 05:03:56.200572 | orchestrator | 2026-04-17 05:03:56.200584 | orchestrator | + echo 2026-04-17 05:03:56.200596 | orchestrator | + echo '# Neutron' 2026-04-17 05:03:56.200608 | orchestrator | + echo 2026-04-17 05:03:56.200620 | orchestrator | + openstack network agent list 2026-04-17 05:03:58.836015 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-17 05:03:58.836117 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-17 05:03:58.836132 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-17 05:03:58.836144 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-17 05:03:58.836155 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-17 05:03:58.836166 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-17 05:03:58.836196 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-17 05:03:58.836208 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-17 05:03:58.836219 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-17 05:03:58.836229 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-17 05:03:58.836240 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-17 05:03:58.836251 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-17 05:03:58.836262 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 
2026-04-17 05:03:59.161768 | orchestrator | + openstack network service provider list 2026-04-17 05:04:01.815224 | orchestrator | +---------------+------+---------+ 2026-04-17 05:04:01.815331 | orchestrator | | Service Type | Name | Default | 2026-04-17 05:04:01.815398 | orchestrator | +---------------+------+---------+ 2026-04-17 05:04:01.815410 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-17 05:04:01.815421 | orchestrator | +---------------+------+---------+ 2026-04-17 05:04:02.192808 | orchestrator | 2026-04-17 05:04:02.192870 | orchestrator | # Nova 2026-04-17 05:04:02.192876 | orchestrator | 2026-04-17 05:04:02.192880 | orchestrator | + echo 2026-04-17 05:04:02.192884 | orchestrator | + echo '# Nova' 2026-04-17 05:04:02.192889 | orchestrator | + echo 2026-04-17 05:04:02.192893 | orchestrator | + openstack compute service list 2026-04-17 05:04:04.836553 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-17 05:04:04.836675 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-17 05:04:04.836693 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-17 05:04:04.836705 | orchestrator | | bd33767f-2e7f-4a33-a23d-c7fa5a79ff1c | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-17T05:03:57.000000 | 2026-04-17 05:04:04.836740 | orchestrator | | aa03edaa-dd3f-4638-a4af-92ca70e27997 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-17T05:04:00.000000 | 2026-04-17 05:04:04.836752 | orchestrator | | 833a183d-d5da-4d1b-a421-5036be2ccc37 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-17T05:04:02.000000 | 2026-04-17 05:04:04.836765 | orchestrator | | 2fd01797-ef71-41f7-8242-1d73336183c1 | nova-conductor | testbed-node-0 | internal | enabled | up | 
2026-04-17T05:03:56.000000 | 2026-04-17 05:04:04.836776 | orchestrator | | 1c17a92b-5501-4b06-89c0-b66434d590b5 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-17T05:03:57.000000 | 2026-04-17 05:04:04.836788 | orchestrator | | 97b54230-d60a-4517-b42e-7293226832a8 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-17T05:03:58.000000 | 2026-04-17 05:04:04.836799 | orchestrator | | 6c11e930-75d2-4523-9663-e8e7cf357df9 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-17T05:03:55.000000 | 2026-04-17 05:04:04.836810 | orchestrator | | 17676f97-c4fe-4882-9d5e-d44a0c486786 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-17T05:03:56.000000 | 2026-04-17 05:04:04.836821 | orchestrator | | a7138f0f-cddc-486a-ac64-e457b9b04bbe | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-17T05:03:56.000000 | 2026-04-17 05:04:04.836833 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-17 05:04:05.149661 | orchestrator | + openstack hypervisor list 2026-04-17 05:04:08.568153 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-17 05:04:08.568257 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-17 05:04:08.568272 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-17 05:04:08.568283 | orchestrator | | 42750323-5f87-49a8-81e3-c06816c96743 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-17 05:04:08.568295 | orchestrator | | 9c34c48a-d7a0-4cfe-9b8a-4ec5b04163f3 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-17 05:04:08.568305 | orchestrator | | 06c80643-ac2f-4c9c-819b-81d84b49467f | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-17 05:04:08.568316 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-17 05:04:08.914744 | orchestrator | 2026-04-17 05:04:08.914844 | orchestrator | # Run OpenStack test play 2026-04-17 05:04:08.914860 | orchestrator | 2026-04-17 05:04:08.914872 | orchestrator | + echo 2026-04-17 05:04:08.914885 | orchestrator | + echo '# Run OpenStack test play' 2026-04-17 05:04:08.914901 | orchestrator | + echo 2026-04-17 05:04:08.914913 | orchestrator | + osism apply --environment openstack test 2026-04-17 05:04:11.064195 | orchestrator | 2026-04-17 05:04:11 | INFO  | Trying to run play test in environment openstack 2026-04-17 05:04:21.169431 | orchestrator | 2026-04-17 05:04:21 | INFO  | Task 7b95ffd3-0a58-4403-a650-3a75fb4103f1 (test) was prepared for execution. 2026-04-17 05:04:21.169552 | orchestrator | 2026-04-17 05:04:21 | INFO  | It takes a moment until task 7b95ffd3-0a58-4403-a650-3a75fb4103f1 (test) has been started and output is visible here. 2026-04-17 05:07:25.674273 | orchestrator | 2026-04-17 05:07:25.674390 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-17 05:07:25.674407 | orchestrator | 2026-04-17 05:07:25.674419 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-17 05:07:25.674431 | orchestrator | Friday 17 April 2026 05:04:25 +0000 (0:00:00.073) 0:00:00.073 ********** 2026-04-17 05:07:25.674442 | orchestrator | changed: [localhost] 2026-04-17 05:07:25.674454 | orchestrator | 2026-04-17 05:07:25.674465 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-17 05:07:25.674476 | orchestrator | Friday 17 April 2026 05:04:29 +0000 (0:00:03.816) 0:00:03.890 ********** 2026-04-17 05:07:25.674510 | orchestrator | changed: [localhost] 2026-04-17 05:07:25.674521 | orchestrator | 2026-04-17 05:07:25.674532 | orchestrator | TASK [Add manager role to user test-admin] 
************************************* 2026-04-17 05:07:25.674543 | orchestrator | Friday 17 April 2026 05:04:33 +0000 (0:00:04.478) 0:00:08.369 ********** 2026-04-17 05:07:25.674554 | orchestrator | changed: [localhost] 2026-04-17 05:07:25.674565 | orchestrator | 2026-04-17 05:07:25.674576 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-17 05:07:25.674587 | orchestrator | Friday 17 April 2026 05:04:40 +0000 (0:00:06.963) 0:00:15.332 ********** 2026-04-17 05:07:25.674598 | orchestrator | changed: [localhost] 2026-04-17 05:07:25.674608 | orchestrator | 2026-04-17 05:07:25.674619 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-17 05:07:25.674630 | orchestrator | Friday 17 April 2026 05:04:45 +0000 (0:00:04.069) 0:00:19.401 ********** 2026-04-17 05:07:25.674641 | orchestrator | changed: [localhost] 2026-04-17 05:07:25.674652 | orchestrator | 2026-04-17 05:07:25.674663 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-17 05:07:25.674673 | orchestrator | Friday 17 April 2026 05:04:49 +0000 (0:00:04.258) 0:00:23.660 ********** 2026-04-17 05:07:25.674684 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-17 05:07:25.674696 | orchestrator | changed: [localhost] => (item=member) 2026-04-17 05:07:25.674751 | orchestrator | changed: [localhost] => (item=creator) 2026-04-17 05:07:25.674765 | orchestrator | 2026-04-17 05:07:25.674776 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-17 05:07:25.674787 | orchestrator | Friday 17 April 2026 05:05:01 +0000 (0:00:11.866) 0:00:35.527 ********** 2026-04-17 05:07:25.674798 | orchestrator | changed: [localhost] 2026-04-17 05:07:25.674809 | orchestrator | 2026-04-17 05:07:25.674822 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-17 
05:07:25.674835 | orchestrator | Friday 17 April 2026 05:05:05 +0000 (0:00:04.307) 0:00:39.834 ********** 2026-04-17 05:07:25.674847 | orchestrator | changed: [localhost] 2026-04-17 05:07:25.674860 | orchestrator | 2026-04-17 05:07:25.674873 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-17 05:07:25.674886 | orchestrator | Friday 17 April 2026 05:05:10 +0000 (0:00:04.780) 0:00:44.615 ********** 2026-04-17 05:07:25.674898 | orchestrator | changed: [localhost] 2026-04-17 05:07:25.674911 | orchestrator | 2026-04-17 05:07:25.674923 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-17 05:07:25.674936 | orchestrator | Friday 17 April 2026 05:05:14 +0000 (0:00:04.411) 0:00:49.026 ********** 2026-04-17 05:07:25.674950 | orchestrator | changed: [localhost] 2026-04-17 05:07:25.674962 | orchestrator | 2026-04-17 05:07:25.674975 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-04-17 05:07:25.674988 | orchestrator | Friday 17 April 2026 05:05:18 +0000 (0:00:03.839) 0:00:52.866 ********** 2026-04-17 05:07:25.675000 | orchestrator | changed: [localhost] 2026-04-17 05:07:25.675013 | orchestrator | 2026-04-17 05:07:25.675026 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-17 05:07:25.675038 | orchestrator | Friday 17 April 2026 05:05:22 +0000 (0:00:04.091) 0:00:56.958 ********** 2026-04-17 05:07:25.675051 | orchestrator | changed: [localhost] 2026-04-17 05:07:25.675064 | orchestrator | 2026-04-17 05:07:25.675076 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-17 05:07:25.675089 | orchestrator | Friday 17 April 2026 05:05:26 +0000 (0:00:04.019) 0:01:00.978 ********** 2026-04-17 05:07:25.675101 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-04-17 05:07:25.675114 | orchestrator | 
changed: [localhost] => (item={'name': 'test-2'}) 2026-04-17 05:07:25.675126 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-04-17 05:07:25.675140 | orchestrator | 2026-04-17 05:07:25.675153 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-17 05:07:25.675167 | orchestrator | Friday 17 April 2026 05:05:40 +0000 (0:00:13.978) 0:01:14.957 ********** 2026-04-17 05:07:25.675187 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-17 05:07:25.675198 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-17 05:07:25.675209 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-17 05:07:25.675220 | orchestrator | 2026-04-17 05:07:25.675231 | orchestrator | TASK [Create test routers] ***************************************************** 2026-04-17 05:07:25.675242 | orchestrator | Friday 17 April 2026 05:05:56 +0000 (0:00:15.501) 0:01:30.459 ********** 2026-04-17 05:07:25.675253 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-17 05:07:25.675263 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-17 05:07:25.675289 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-17 05:07:25.675300 | orchestrator | 2026-04-17 05:07:25.675311 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-17 05:07:25.675322 | orchestrator | 2026-04-17 05:07:25.675333 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-17 05:07:25.675361 | orchestrator | Friday 17 April 2026 05:06:24 +0000 (0:00:28.446) 0:01:58.906 
********** 2026-04-17 05:07:25.675373 | orchestrator | ok: [localhost] 2026-04-17 05:07:25.675385 | orchestrator | 2026-04-17 05:07:25.675396 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-04-17 05:07:25.675407 | orchestrator | Friday 17 April 2026 05:06:28 +0000 (0:00:03.698) 0:02:02.604 ********** 2026-04-17 05:07:25.675418 | orchestrator | skipping: [localhost] 2026-04-17 05:07:25.675429 | orchestrator | 2026-04-17 05:07:25.675440 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-17 05:07:25.675450 | orchestrator | Friday 17 April 2026 05:06:28 +0000 (0:00:00.060) 0:02:02.665 ********** 2026-04-17 05:07:25.675461 | orchestrator | skipping: [localhost] 2026-04-17 05:07:25.675472 | orchestrator | 2026-04-17 05:07:25.675483 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-17 05:07:25.675493 | orchestrator | Friday 17 April 2026 05:06:28 +0000 (0:00:00.057) 0:02:02.723 ********** 2026-04-17 05:07:25.675504 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-17 05:07:25.675515 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-17 05:07:25.675527 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-17 05:07:25.675545 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-17 05:07:25.675562 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-17 05:07:25.675574 | orchestrator | skipping: [localhost] 2026-04-17 05:07:25.675585 | orchestrator | 2026-04-17 05:07:25.675596 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-17 05:07:25.675606 | orchestrator | Friday 17 April 2026 05:06:28 +0000 (0:00:00.171) 0:02:02.894 ********** 
2026-04-17 05:07:25.675617 | orchestrator | skipping: [localhost] 2026-04-17 05:07:25.675628 | orchestrator | 2026-04-17 05:07:25.675639 | orchestrator | TASK [Create test instances] *************************************************** 2026-04-17 05:07:25.675650 | orchestrator | Friday 17 April 2026 05:06:28 +0000 (0:00:00.148) 0:02:03.042 ********** 2026-04-17 05:07:25.675661 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-17 05:07:25.675672 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-17 05:07:25.675683 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-17 05:07:25.675693 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-17 05:07:25.675704 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-17 05:07:25.675765 | orchestrator | 2026-04-17 05:07:25.675776 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-17 05:07:25.675787 | orchestrator | Friday 17 April 2026 05:06:33 +0000 (0:00:05.221) 0:02:08.264 ********** 2026-04-17 05:07:25.675798 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-17 05:07:25.675810 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-04-17 05:07:25.675821 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-04-17 05:07:25.675832 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 
2026-04-17 05:07:25.675846 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j195511883077.3695', 'results_file': '/ansible/.ansible_async/j195511883077.3695', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:07:25.675860 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j392491957442.3720', 'results_file': '/ansible/.ansible_async/j392491957442.3720', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:07:25.675872 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j285404050413.3745', 'results_file': '/ansible/.ansible_async/j285404050413.3745', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:07:25.675883 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j318864905053.3770', 'results_file': '/ansible/.ansible_async/j318864905053.3770', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:07:25.675894 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j103238451074.3795', 'results_file': '/ansible/.ansible_async/j103238451074.3795', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:07:25.675905 | orchestrator | 2026-04-17 05:07:25.675916 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-04-17 05:07:25.675927 | orchestrator | Friday 17 April 2026 05:07:20 +0000 (0:00:46.854) 0:02:55.118 ********** 2026-04-17 05:07:25.675938 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-17 05:07:25.675956 | 
orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-17 05:08:35.201918 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-17 05:08:35.202088 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-17 05:08:35.202106 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-17 05:08:35.202118 | orchestrator | 2026-04-17 05:08:35.202131 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-17 05:08:35.202142 | orchestrator | Friday 17 April 2026 05:07:25 +0000 (0:00:04.935) 0:03:00.053 ********** 2026-04-17 05:08:35.202154 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-04-17 05:08:35.202168 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j831669197508.3900', 'results_file': '/ansible/.ansible_async/j831669197508.3900', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:08:35.202182 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j74363849768.3925', 'results_file': '/ansible/.ansible_async/j74363849768.3925', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:08:35.202218 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j815528796775.3950', 'results_file': '/ansible/.ansible_async/j815528796775.3950', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:08:35.202229 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j993636864099.3975', 'results_file': '/ansible/.ansible_async/j993636864099.3975', 
'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:08:35.202257 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j589606800544.4000', 'results_file': '/ansible/.ansible_async/j589606800544.4000', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:08:35.202269 | orchestrator | 2026-04-17 05:08:35.202280 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-17 05:08:35.202291 | orchestrator | Friday 17 April 2026 05:07:35 +0000 (0:00:09.615) 0:03:09.669 ********** 2026-04-17 05:08:35.202302 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-17 05:08:35.202312 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-17 05:08:35.202323 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-17 05:08:35.202333 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-17 05:08:35.202344 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-17 05:08:35.202354 | orchestrator | 2026-04-17 05:08:35.202366 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-17 05:08:35.202377 | orchestrator | Friday 17 April 2026 05:07:40 +0000 (0:00:04.936) 0:03:14.606 ********** 2026-04-17 05:08:35.202387 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-04-17 05:08:35.202398 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j116691509177.4077', 'results_file': '/ansible/.ansible_async/j116691509177.4077', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:08:35.202410 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j554229203397.4102', 'results_file': '/ansible/.ansible_async/j554229203397.4102', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:08:35.202424 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j791958079440.4128', 'results_file': '/ansible/.ansible_async/j791958079440.4128', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:08:35.202442 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j154004013424.4154', 'results_file': '/ansible/.ansible_async/j154004013424.4154', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:08:35.202474 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j438833739419.4180', 'results_file': '/ansible/.ansible_async/j438833739419.4180', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-17 05:08:35.202487 | orchestrator | 2026-04-17 05:08:35.202501 | orchestrator | TASK [Create test volume] ****************************************************** 2026-04-17 05:08:35.202514 | orchestrator | Friday 17 April 2026 05:07:50 +0000 (0:00:10.447) 0:03:25.053 ********** 2026-04-17 05:08:35.202526 | orchestrator | changed: [localhost] 2026-04-17 05:08:35.202540 | orchestrator | 2026-04-17 05:08:35.202554 | 
orchestrator | TASK [Attach test volume] ****************************************************** 2026-04-17 05:08:35.202574 | orchestrator | Friday 17 April 2026 05:07:56 +0000 (0:00:06.314) 0:03:31.368 ********** 2026-04-17 05:08:35.202586 | orchestrator | changed: [localhost] 2026-04-17 05:08:35.202599 | orchestrator | 2026-04-17 05:08:35.202611 | orchestrator | TASK [Create floating ip addresses] ******************************************** 2026-04-17 05:08:35.202624 | orchestrator | Friday 17 April 2026 05:08:10 +0000 (0:00:13.514) 0:03:44.883 ********** 2026-04-17 05:08:35.202638 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-17 05:08:35.202650 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-17 05:08:35.202662 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-17 05:08:35.202675 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-17 05:08:35.202688 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-17 05:08:35.202701 | orchestrator | 2026-04-17 05:08:35.202713 | orchestrator | TASK [Print floating ip addresses] ********************************************* 2026-04-17 05:08:35.202725 | orchestrator | Friday 17 April 2026 05:08:34 +0000 (0:00:24.250) 0:04:09.133 ********** 2026-04-17 05:08:35.202738 | orchestrator | ok: [localhost] => (item=test) => { 2026-04-17 05:08:35.202751 | orchestrator |  "msg": "test: 192.168.112.133" 2026-04-17 05:08:35.202764 | orchestrator | } 2026-04-17 05:08:35.202778 | orchestrator | ok: [localhost] => (item=test-1) => { 2026-04-17 05:08:35.202790 | orchestrator |  "msg": "test-1: 192.168.112.109" 2026-04-17 05:08:35.202801 | orchestrator | } 2026-04-17 05:08:35.202812 | orchestrator | ok: [localhost] => (item=test-2) => { 2026-04-17 05:08:35.202823 | orchestrator |  "msg": "test-2: 192.168.112.153" 2026-04-17 05:08:35.202855 | 
orchestrator | } 2026-04-17 05:08:35.202867 | orchestrator | ok: [localhost] => (item=test-3) => { 2026-04-17 05:08:35.202878 | orchestrator |  "msg": "test-3: 192.168.112.122" 2026-04-17 05:08:35.202888 | orchestrator | } 2026-04-17 05:08:35.202899 | orchestrator | ok: [localhost] => (item=test-4) => { 2026-04-17 05:08:35.202909 | orchestrator |  "msg": "test-4: 192.168.112.193" 2026-04-17 05:08:35.202920 | orchestrator | } 2026-04-17 05:08:35.202931 | orchestrator | 2026-04-17 05:08:35.202942 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:08:35.202954 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 05:08:35.202965 | orchestrator | 2026-04-17 05:08:35.202976 | orchestrator | 2026-04-17 05:08:35.202987 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:08:35.202998 | orchestrator | Friday 17 April 2026 05:08:34 +0000 (0:00:00.143) 0:04:09.276 ********** 2026-04-17 05:08:35.203009 | orchestrator | =============================================================================== 2026-04-17 05:08:35.203019 | orchestrator | Wait for instance creation to complete --------------------------------- 46.85s 2026-04-17 05:08:35.203030 | orchestrator | Create test routers ---------------------------------------------------- 28.45s 2026-04-17 05:08:35.203041 | orchestrator | Create floating ip addresses ------------------------------------------- 24.25s 2026-04-17 05:08:35.203052 | orchestrator | Create test subnets ---------------------------------------------------- 15.50s 2026-04-17 05:08:35.203062 | orchestrator | Create test networks --------------------------------------------------- 13.98s 2026-04-17 05:08:35.203073 | orchestrator | Attach test volume ----------------------------------------------------- 13.51s 2026-04-17 05:08:35.203084 | orchestrator | Add member roles to user test 
------------------------------------------ 11.87s 2026-04-17 05:08:35.203094 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.45s 2026-04-17 05:08:35.203105 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.62s 2026-04-17 05:08:35.203115 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.96s 2026-04-17 05:08:35.203126 | orchestrator | Create test volume ------------------------------------------------------ 6.31s 2026-04-17 05:08:35.203144 | orchestrator | Create test instances --------------------------------------------------- 5.22s 2026-04-17 05:08:35.203154 | orchestrator | Add tag to instances ---------------------------------------------------- 4.94s 2026-04-17 05:08:35.203165 | orchestrator | Add metadata to instances ----------------------------------------------- 4.94s 2026-04-17 05:08:35.203175 | orchestrator | Create ssh security group ----------------------------------------------- 4.78s 2026-04-17 05:08:35.203186 | orchestrator | Create test-admin user -------------------------------------------------- 4.48s 2026-04-17 05:08:35.203197 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.41s 2026-04-17 05:08:35.203214 | orchestrator | Create test server group ------------------------------------------------ 4.31s 2026-04-17 05:08:35.203232 | orchestrator | Create test user -------------------------------------------------------- 4.26s 2026-04-17 05:08:35.203250 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.09s 2026-04-17 05:08:35.627188 | orchestrator | + server_list 2026-04-17 05:08:35.627280 | orchestrator | + openstack --os-cloud test server list 2026-04-17 05:08:39.332356 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-17 
05:08:39.332431 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-04-17 05:08:39.332438 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-17 05:08:39.332443 | orchestrator | | ade52472-1613-40d8-aa9f-b5a0cc6f0d77 | test-4 | ACTIVE | test-3=192.168.112.193, 192.168.202.47 | N/A (booted from volume) | SCS-1L-1 | 2026-04-17 05:08:39.332448 | orchestrator | | 10df8f62-29ae-4e8f-92d4-b78dfab79c06 | test-2 | ACTIVE | test-2=192.168.112.153, 192.168.201.130 | N/A (booted from volume) | SCS-1L-1 | 2026-04-17 05:08:39.332453 | orchestrator | | 42216a94-69f4-42ce-a51a-18de589b5980 | test-1 | ACTIVE | test-1=192.168.112.109, 192.168.200.11 | N/A (booted from volume) | SCS-1L-1 | 2026-04-17 05:08:39.332458 | orchestrator | | 49af2b6b-55c5-42f8-bf30-a81aa8a4e60b | test-3 | ACTIVE | test-2=192.168.112.122, 192.168.201.105 | N/A (booted from volume) | SCS-1L-1 | 2026-04-17 05:08:39.332463 | orchestrator | | b8742be6-41cf-41ee-8066-4d495a4e1434 | test | ACTIVE | test-1=192.168.112.133, 192.168.200.69 | N/A (booted from volume) | SCS-1L-1 | 2026-04-17 05:08:39.332468 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-17 05:08:39.682324 | orchestrator | + openstack --os-cloud test server show test 2026-04-17 05:08:42.968132 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:42.968221 | orchestrator | | Field | Value | 2026-04-17 05:08:42.968232 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:42.968240 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-17 05:08:42.968260 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-17 05:08:42.968267 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-17 05:08:42.968278 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-17 05:08:42.968285 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-17 05:08:42.968292 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-17 05:08:42.968311 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-17 05:08:42.968318 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-17 05:08:42.968325 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-17 05:08:42.968332 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-17 05:08:42.968395 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-17 05:08:42.968403 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-17 05:08:42.968410 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-17 05:08:42.968420 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-17 05:08:42.968427 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-17 05:08:42.968434 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T05:07:04.000000 | 2026-04-17 05:08:42.968445 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-17 05:08:42.968472 | orchestrator | | accessIPv4 | | 2026-04-17 05:08:42.968480 | orchestrator | | accessIPv6 | | 2026-04-17 05:08:42.968492 | orchestrator 
| | addresses | test-1=192.168.112.133, 192.168.200.69 | 2026-04-17 05:08:42.968499 | orchestrator | | config_drive | | 2026-04-17 05:08:42.968506 | orchestrator | | created | 2026-04-17T05:06:38Z | 2026-04-17 05:08:42.968513 | orchestrator | | description | None | 2026-04-17 05:08:42.968523 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-17 05:08:42.968530 | orchestrator | | hostId | 61880b69f50018901e2e10e887010d9cd89861e535470105893d2ad2 | 2026-04-17 05:08:42.968537 | orchestrator | | host_status | None | 2026-04-17 05:08:42.968550 | orchestrator | | id | b8742be6-41cf-41ee-8066-4d495a4e1434 | 2026-04-17 05:08:42.968557 | orchestrator | | image | N/A (booted from volume) | 2026-04-17 05:08:42.968569 | orchestrator | | key_name | test | 2026-04-17 05:08:42.968576 | orchestrator | | locked | False | 2026-04-17 05:08:42.968583 | orchestrator | | locked_reason | None | 2026-04-17 05:08:42.968590 | orchestrator | | name | test | 2026-04-17 05:08:42.968596 | orchestrator | | pinned_availability_zone | None | 2026-04-17 05:08:42.968603 | orchestrator | | progress | 0 | 2026-04-17 05:08:42.968615 | orchestrator | | project_id | d684144c40b742af8c0edaad54fe7ba2 | 2026-04-17 05:08:42.968623 | orchestrator | | properties | hostname='test' | 2026-04-17 05:08:42.968635 | orchestrator | | security_groups | name='icmp' | 2026-04-17 05:08:42.968642 | orchestrator | | | name='ssh' | 2026-04-17 05:08:42.968653 | orchestrator | | server_groups | None | 2026-04-17 05:08:42.968660 | orchestrator | | status | ACTIVE | 2026-04-17 05:08:42.968667 | orchestrator | | tags | test | 2026-04-17 05:08:42.968673 | orchestrator | | 
trusted_image_certificates | None | 2026-04-17 05:08:42.968680 | orchestrator | | updated | 2026-04-17T05:07:26Z | 2026-04-17 05:08:42.968693 | orchestrator | | user_id | 232c7f32176f4cb293228fac19d8e2ff | 2026-04-17 05:08:42.968700 | orchestrator | | volumes_attached | delete_on_termination='True', id='8053a2c6-4ff5-4dc7-9b75-b7bf147b4af2' | 2026-04-17 05:08:42.968707 | orchestrator | | | delete_on_termination='False', id='3ae2f902-71fb-4320-883a-20f3ed544819' | 2026-04-17 05:08:42.973174 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:43.298544 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-17 05:08:46.372565 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:46.372677 | orchestrator | | Field | Value | 2026-04-17 05:08:46.372695 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2026-04-17 05:08:46.372707 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-17 05:08:46.372719 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-17 05:08:46.372731 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-17 05:08:46.372758 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-17 05:08:46.372770 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-17 05:08:46.372781 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-17 05:08:46.372833 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-17 05:08:46.372846 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-17 05:08:46.372954 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-17 05:08:46.372966 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-17 05:08:46.373064 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-17 05:08:46.373080 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-17 05:08:46.373093 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-17 05:08:46.373114 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-17 05:08:46.373129 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-17 05:08:46.373155 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T05:07:04.000000 | 2026-04-17 05:08:46.373179 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-17 05:08:46.373192 | orchestrator | | accessIPv4 | | 2026-04-17 05:08:46.373205 | orchestrator | | accessIPv6 | | 2026-04-17 05:08:46.373217 | orchestrator | | addresses | test-1=192.168.112.109, 192.168.200.11 | 2026-04-17 05:08:46.373230 | orchestrator | | config_drive | | 2026-04-17 05:08:46.373243 | orchestrator | | created | 2026-04-17T05:06:39Z | 2026-04-17 05:08:46.373261 | orchestrator | | description | None | 2026-04-17 05:08:46.373274 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', 
extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-17 05:08:46.373294 | orchestrator | | hostId | 61880b69f50018901e2e10e887010d9cd89861e535470105893d2ad2 | 2026-04-17 05:08:46.373320 | orchestrator | | host_status | None | 2026-04-17 05:08:46.373342 | orchestrator | | id | 42216a94-69f4-42ce-a51a-18de589b5980 | 2026-04-17 05:08:46.373355 | orchestrator | | image | N/A (booted from volume) | 2026-04-17 05:08:46.373368 | orchestrator | | key_name | test | 2026-04-17 05:08:46.373381 | orchestrator | | locked | False | 2026-04-17 05:08:46.373395 | orchestrator | | locked_reason | None | 2026-04-17 05:08:46.373407 | orchestrator | | name | test-1 | 2026-04-17 05:08:46.373426 | orchestrator | | pinned_availability_zone | None | 2026-04-17 05:08:46.373437 | orchestrator | | progress | 0 | 2026-04-17 05:08:46.373458 | orchestrator | | project_id | d684144c40b742af8c0edaad54fe7ba2 | 2026-04-17 05:08:46.373470 | orchestrator | | properties | hostname='test-1' | 2026-04-17 05:08:46.373488 | orchestrator | | security_groups | name='icmp' | 2026-04-17 05:08:46.373499 | orchestrator | | | name='ssh' | 2026-04-17 05:08:46.373510 | orchestrator | | server_groups | None | 2026-04-17 05:08:46.373521 | orchestrator | | status | ACTIVE | 2026-04-17 05:08:46.373532 | orchestrator | | tags | test | 2026-04-17 05:08:46.373543 | orchestrator | | trusted_image_certificates | None | 2026-04-17 05:08:46.373555 | orchestrator | | updated | 2026-04-17T05:07:27Z | 2026-04-17 05:08:46.373573 | orchestrator | | user_id | 232c7f32176f4cb293228fac19d8e2ff | 2026-04-17 05:08:46.373584 | orchestrator | | volumes_attached | delete_on_termination='True', id='16b0605e-ac56-49f1-a4b1-5a304824d63e' | 2026-04-17 05:08:46.376900 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:46.751069 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-17 05:08:49.762830 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:49.763017 | orchestrator | | Field | Value | 2026-04-17 05:08:49.763039 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:49.763051 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-17 05:08:49.763063 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-17 05:08:49.763074 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-17 05:08:49.763113 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-17 05:08:49.763125 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-17 05:08:49.763136 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-17 
05:08:49.763167 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-17 05:08:49.763179 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-17 05:08:49.763190 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-17 05:08:49.763202 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-17 05:08:49.763213 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-17 05:08:49.763224 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-17 05:08:49.763243 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-17 05:08:49.763259 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-17 05:08:49.763271 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-17 05:08:49.763282 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T05:07:04.000000 | 2026-04-17 05:08:49.763300 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-17 05:08:49.763312 | orchestrator | | accessIPv4 | | 2026-04-17 05:08:49.763323 | orchestrator | | accessIPv6 | | 2026-04-17 05:08:49.763334 | orchestrator | | addresses | test-2=192.168.112.153, 192.168.201.130 | 2026-04-17 05:08:49.763345 | orchestrator | | config_drive | | 2026-04-17 05:08:49.763370 | orchestrator | | created | 2026-04-17T05:06:39Z | 2026-04-17 05:08:49.763381 | orchestrator | | description | None | 2026-04-17 05:08:49.763397 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-17 05:08:49.763409 | orchestrator | | hostId | 9a6fc80b5ef5804c541967b849f5345d94f707e64997816cdf8509da | 2026-04-17 05:08:49.763420 | orchestrator | | host_status | None | 2026-04-17 05:08:49.763438 | orchestrator | | id | 
10df8f62-29ae-4e8f-92d4-b78dfab79c06 | 2026-04-17 05:08:49.763449 | orchestrator | | image | N/A (booted from volume) | 2026-04-17 05:08:49.763461 | orchestrator | | key_name | test | 2026-04-17 05:08:49.763472 | orchestrator | | locked | False | 2026-04-17 05:08:49.763483 | orchestrator | | locked_reason | None | 2026-04-17 05:08:49.763501 | orchestrator | | name | test-2 | 2026-04-17 05:08:49.763512 | orchestrator | | pinned_availability_zone | None | 2026-04-17 05:08:49.763527 | orchestrator | | progress | 0 | 2026-04-17 05:08:49.763539 | orchestrator | | project_id | d684144c40b742af8c0edaad54fe7ba2 | 2026-04-17 05:08:49.763551 | orchestrator | | properties | hostname='test-2' | 2026-04-17 05:08:49.763569 | orchestrator | | security_groups | name='icmp' | 2026-04-17 05:08:49.763580 | orchestrator | | | name='ssh' | 2026-04-17 05:08:49.763591 | orchestrator | | server_groups | None | 2026-04-17 05:08:49.763603 | orchestrator | | status | ACTIVE | 2026-04-17 05:08:49.763620 | orchestrator | | tags | test | 2026-04-17 05:08:49.763631 | orchestrator | | trusted_image_certificates | None | 2026-04-17 05:08:49.763643 | orchestrator | | updated | 2026-04-17T05:07:28Z | 2026-04-17 05:08:49.763658 | orchestrator | | user_id | 232c7f32176f4cb293228fac19d8e2ff | 2026-04-17 05:08:49.763670 | orchestrator | | volumes_attached | delete_on_termination='True', id='578663b6-c726-4521-9fb0-d89e204fe08b' | 2026-04-17 05:08:49.770088 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:50.129745 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-17 05:08:53.178263 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:53.178393 | orchestrator | | Field | Value | 2026-04-17 05:08:53.178410 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:53.178444 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-17 05:08:53.178456 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-17 05:08:53.178468 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-17 05:08:53.178492 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-17 05:08:53.178504 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-17 05:08:53.178516 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-17 05:08:53.178545 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-17 05:08:53.178558 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-17 05:08:53.178569 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-17 05:08:53.178588 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-17 05:08:53.178599 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-17 05:08:53.178618 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-17 05:08:53.178637 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-04-17 05:08:53.178656 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-17 05:08:53.178673 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-17 05:08:53.178691 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T05:07:04.000000 | 2026-04-17 05:08:53.178720 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-17 05:08:53.178740 | orchestrator | | accessIPv4 | | 2026-04-17 05:08:53.178767 | orchestrator | | accessIPv6 | | 2026-04-17 05:08:53.178785 | orchestrator | | addresses | test-2=192.168.112.122, 192.168.201.105 | 2026-04-17 05:08:53.179281 | orchestrator | | config_drive | | 2026-04-17 05:08:53.179304 | orchestrator | | created | 2026-04-17T05:06:39Z | 2026-04-17 05:08:53.179316 | orchestrator | | description | None | 2026-04-17 05:08:53.179337 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-17 05:08:53.179357 | orchestrator | | hostId | 9a6fc80b5ef5804c541967b849f5345d94f707e64997816cdf8509da | 2026-04-17 05:08:53.179375 | orchestrator | | host_status | None | 2026-04-17 05:08:53.179407 | orchestrator | | id | 49af2b6b-55c5-42f8-bf30-a81aa8a4e60b | 2026-04-17 05:08:53.179428 | orchestrator | | image | N/A (booted from volume) | 2026-04-17 05:08:53.179463 | orchestrator | | key_name | test | 2026-04-17 05:08:53.179484 | orchestrator | | locked | False | 2026-04-17 05:08:53.179511 | orchestrator | | locked_reason | None | 2026-04-17 05:08:53.179531 | orchestrator | | name | test-3 | 2026-04-17 05:08:53.179549 | orchestrator | | pinned_availability_zone | None | 2026-04-17 05:08:53.179567 | orchestrator | | progress | 0 | 2026-04-17 
05:08:53.179585 | orchestrator | | project_id | d684144c40b742af8c0edaad54fe7ba2 | 2026-04-17 05:08:53.179604 | orchestrator | | properties | hostname='test-3' | 2026-04-17 05:08:53.179634 | orchestrator | | security_groups | name='icmp' | 2026-04-17 05:08:53.179665 | orchestrator | | | name='ssh' | 2026-04-17 05:08:53.179685 | orchestrator | | server_groups | None | 2026-04-17 05:08:53.179705 | orchestrator | | status | ACTIVE | 2026-04-17 05:08:53.179731 | orchestrator | | tags | test | 2026-04-17 05:08:53.179751 | orchestrator | | trusted_image_certificates | None | 2026-04-17 05:08:53.179770 | orchestrator | | updated | 2026-04-17T05:07:28Z | 2026-04-17 05:08:53.179789 | orchestrator | | user_id | 232c7f32176f4cb293228fac19d8e2ff | 2026-04-17 05:08:53.179807 | orchestrator | | volumes_attached | delete_on_termination='True', id='21bca45f-c565-4dce-8d3d-7605cd53b89e' | 2026-04-17 05:08:53.183730 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:53.530058 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-17 05:08:56.421188 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:56.421294 | orchestrator | | Field | Value | 2026-04-17 05:08:56.421315 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 05:08:56.421331 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-17 05:08:56.421363 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-17 05:08:56.421379 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-17 05:08:56.421392 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-17 05:08:56.421406 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-17 05:08:56.421418 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-17 05:08:56.421477 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-17 05:08:56.421492 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-17 05:08:56.421506 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-17 05:08:56.421520 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-17 05:08:56.421535 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-17 05:08:56.421555 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-17 05:08:56.421570 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-17 05:08:56.421585 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-17 05:08:56.421597 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-17 05:08:56.421619 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T05:07:06.000000 | 2026-04-17 05:08:56.421641 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-17 05:08:56.421656 | orchestrator | | accessIPv4 | | 2026-04-17 05:08:56.421672 | orchestrator | | accessIPv6 | | 2026-04-17 05:08:56.421686 | orchestrator | | 
addresses | test-3=192.168.112.193, 192.168.202.47 | 2026-04-17 05:08:56.421699 | orchestrator | | config_drive | | 2026-04-17 05:08:56.421719 | orchestrator | | created | 2026-04-17T05:06:41Z | 2026-04-17 05:08:56.421734 | orchestrator | | description | None | 2026-04-17 05:08:56.421750 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-17 05:08:56.421766 | orchestrator | | hostId | 9a6fc80b5ef5804c541967b849f5345d94f707e64997816cdf8509da | 2026-04-17 05:08:56.421790 | orchestrator | | host_status | None | 2026-04-17 05:08:56.421812 | orchestrator | | id | ade52472-1613-40d8-aa9f-b5a0cc6f0d77 | 2026-04-17 05:08:56.421827 | orchestrator | | image | N/A (booted from volume) | 2026-04-17 05:08:56.421844 | orchestrator | | key_name | test | 2026-04-17 05:08:56.421860 | orchestrator | | locked | False | 2026-04-17 05:08:56.421899 | orchestrator | | locked_reason | None | 2026-04-17 05:08:56.421918 | orchestrator | | name | test-4 | 2026-04-17 05:08:56.421933 | orchestrator | | pinned_availability_zone | None | 2026-04-17 05:08:56.421950 | orchestrator | | progress | 0 | 2026-04-17 05:08:56.421975 | orchestrator | | project_id | d684144c40b742af8c0edaad54fe7ba2 | 2026-04-17 05:08:56.421989 | orchestrator | | properties | hostname='test-4' | 2026-04-17 05:08:56.422011 | orchestrator | | security_groups | name='icmp' | 2026-04-17 05:08:56.422118 | orchestrator | | | name='ssh' | 2026-04-17 05:08:56.422133 | orchestrator | | server_groups | None | 2026-04-17 05:08:56.422146 | orchestrator | | status | ACTIVE | 2026-04-17 05:08:56.422159 | orchestrator | | tags | test | 2026-04-17 05:08:56.422178 | orchestrator | | 
trusted_image_certificates | None |
2026-04-17 05:08:56.422192 | orchestrator | | updated | 2026-04-17T05:07:29Z |
2026-04-17 05:08:56.422216 | orchestrator | | user_id | 232c7f32176f4cb293228fac19d8e2ff |
2026-04-17 05:08:56.422230 | orchestrator | | volumes_attached | delete_on_termination='True', id='f71a7bd0-47ba-449a-90f3-77f4f677840c' |
2026-04-17 05:08:56.426269 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 05:08:56.742329 | orchestrator | + server_ping
2026-04-17 05:08:56.742995 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-17 05:08:56.743256 | orchestrator | ++ tr -d '\r'
2026-04-17 05:08:59.625179 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 05:08:59.625288 | orchestrator | + ping -c3 192.168.112.193
2026-04-17 05:08:59.638456 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data.
2026-04-17 05:08:59.638531 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=6.43 ms
2026-04-17 05:09:00.635711 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=1.80 ms
2026-04-17 05:09:01.637315 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.53 ms
2026-04-17 05:09:01.637418 | orchestrator |
2026-04-17 05:09:01.637435 | orchestrator | --- 192.168.112.193 ping statistics ---
2026-04-17 05:09:01.637447 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 05:09:01.637458 | orchestrator | rtt min/avg/max/mdev = 1.534/3.254/6.425/2.244 ms
2026-04-17 05:09:01.637835 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 05:09:01.637860 | orchestrator | + ping -c3 192.168.112.153
2026-04-17 05:09:01.647224 | orchestrator | PING 192.168.112.153 (192.168.112.153) 56(84) bytes of data.
2026-04-17 05:09:01.647252 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=1 ttl=63 time=4.98 ms
2026-04-17 05:09:02.646284 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=2 ttl=63 time=2.17 ms
2026-04-17 05:09:03.648192 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=3 ttl=63 time=1.81 ms
2026-04-17 05:09:03.648293 | orchestrator |
2026-04-17 05:09:03.648308 | orchestrator | --- 192.168.112.153 ping statistics ---
2026-04-17 05:09:03.648321 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 05:09:03.648332 | orchestrator | rtt min/avg/max/mdev = 1.808/2.984/4.980/1.418 ms
2026-04-17 05:09:03.648344 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 05:09:03.648355 | orchestrator | + ping -c3 192.168.112.133
2026-04-17 05:09:03.660995 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2026-04-17 05:09:03.661041 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=7.96 ms
2026-04-17 05:09:04.656412 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=1.80 ms
2026-04-17 05:09:05.657926 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.51 ms
2026-04-17 05:09:05.658112 | orchestrator |
2026-04-17 05:09:05.658133 | orchestrator | --- 192.168.112.133 ping statistics ---
2026-04-17 05:09:05.658146 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 05:09:05.658184 | orchestrator | rtt min/avg/max/mdev = 1.509/3.757/7.964/2.977 ms
2026-04-17 05:09:05.658353 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 05:09:05.658374 | orchestrator | + ping -c3 192.168.112.109
2026-04-17 05:09:05.674340 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2026-04-17 05:09:05.674405 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=11.0 ms
2026-04-17 05:09:06.666696 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=1.90 ms
2026-04-17 05:09:07.666558 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.55 ms
2026-04-17 05:09:07.666702 | orchestrator |
2026-04-17 05:09:07.666756 | orchestrator | --- 192.168.112.109 ping statistics ---
2026-04-17 05:09:07.666780 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-17 05:09:07.666801 | orchestrator | rtt min/avg/max/mdev = 1.548/4.823/11.022/4.385 ms
2026-04-17 05:09:07.667926 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 05:09:07.668016 | orchestrator | + ping -c3 192.168.112.122
2026-04-17 05:09:07.680272 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
2026-04-17 05:09:07.680355 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=7.42 ms 2026-04-17 05:09:08.677227 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.35 ms 2026-04-17 05:09:09.678626 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.49 ms 2026-04-17 05:09:09.678728 | orchestrator | 2026-04-17 05:09:09.678745 | orchestrator | --- 192.168.112.122 ping statistics --- 2026-04-17 05:09:09.678758 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-17 05:09:09.678769 | orchestrator | rtt min/avg/max/mdev = 1.494/3.753/7.416/2.613 ms 2026-04-17 05:09:09.678780 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-17 05:09:09.883219 | orchestrator | ok: Runtime: 0:10:09.472780 2026-04-17 05:09:09.940462 | 2026-04-17 05:09:09.940628 | TASK [Run tempest] 2026-04-17 05:09:10.479864 | orchestrator | skipping: Conditional result was False 2026-04-17 05:09:10.500539 | 2026-04-17 05:09:10.500735 | TASK [Check prometheus alert status] 2026-04-17 05:09:11.038946 | orchestrator | skipping: Conditional result was False 2026-04-17 05:09:11.055062 | 2026-04-17 05:09:11.055259 | PLAY [Upgrade testbed] 2026-04-17 05:09:11.071254 | 2026-04-17 05:09:11.071408 | TASK [Print next ceph version] 2026-04-17 05:09:11.162654 | orchestrator | ok 2026-04-17 05:09:11.172643 | 2026-04-17 05:09:11.172775 | TASK [Print next openstack version] 2026-04-17 05:09:11.243092 | orchestrator | ok 2026-04-17 05:09:11.257032 | 2026-04-17 05:09:11.257332 | TASK [Print next manager version] 2026-04-17 05:09:11.324577 | orchestrator | ok 2026-04-17 05:09:11.336549 | 2026-04-17 05:09:11.336704 | TASK [Set cloud fact (Zuul deployment)] 2026-04-17 05:09:11.383059 | orchestrator | ok 2026-04-17 05:09:11.394869 | 2026-04-17 05:09:11.395008 | TASK [Set cloud fact (local deployment)] 2026-04-17 05:09:11.421145 | orchestrator | skipping: Conditional result was False 2026-04-17 05:09:11.436945 | 2026-04-17 
05:09:11.437165 | TASK [Fetch manager address]
2026-04-17 05:09:11.702219 | orchestrator | ok
2026-04-17 05:09:11.712748 |
2026-04-17 05:09:11.712885 | TASK [Set manager_host address]
2026-04-17 05:09:11.778818 | orchestrator | ok
2026-04-17 05:09:11.791800 |
2026-04-17 05:09:11.791948 | TASK [Run upgrade]
2026-04-17 05:09:12.472240 | orchestrator | + set -e
2026-04-17 05:09:12.472391 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-04-17 05:09:12.472412 | orchestrator | + MANAGER_VERSION=10.0.0
2026-04-17 05:09:12.472422 | orchestrator | + CEPH_VERSION=reef
2026-04-17 05:09:12.472430 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-04-17 05:09:12.472438 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-04-17 05:09:12.472447 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0 reef 2024.2 kolla/release'
2026-04-17 05:09:12.481947 | orchestrator | + set -e
2026-04-17 05:09:12.482079 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-17 05:09:12.482101 | orchestrator | ++ export INTERACTIVE=false
2026-04-17 05:09:12.482121 | orchestrator | ++ INTERACTIVE=false
2026-04-17 05:09:12.482132 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-17 05:09:12.482146 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-17 05:09:12.483205 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-04-17 05:09:12.528826 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-04-17 05:09:12.529582 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-04-17 05:09:12.564810 | orchestrator |
2026-04-17 05:09:12.564940 | orchestrator | # UPGRADE MANAGER
2026-04-17 05:09:12.564962 | orchestrator |
2026-04-17 05:09:12.564974 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-04-17 05:09:12.564986 | orchestrator | + echo
2026-04-17 05:09:12.565000 | orchestrator | + echo '# UPGRADE MANAGER'
2026-04-17 05:09:12.565011 | orchestrator | + echo
2026-04-17 05:09:12.565022 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-04-17 05:09:12.565033 | orchestrator | + MANAGER_VERSION=10.0.0
2026-04-17 05:09:12.565044 | orchestrator | + CEPH_VERSION=reef
2026-04-17 05:09:12.565055 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-04-17 05:09:12.565066 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-04-17 05:09:12.565077 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0
2026-04-17 05:09:12.569668 | orchestrator | + set -e
2026-04-17 05:09:12.569743 | orchestrator | + VERSION=10.0.0
2026-04-17 05:09:12.569765 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-17 05:09:12.575211 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-17 05:09:12.575270 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-17 05:09:12.581161 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-17 05:09:12.586210 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-17 05:09:12.593520 | orchestrator | /opt/configuration ~
2026-04-17 05:09:12.593568 | orchestrator | + set -e
2026-04-17 05:09:12.593577 | orchestrator | + pushd /opt/configuration
2026-04-17 05:09:12.593585 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-17 05:09:12.593595 | orchestrator | + source /opt/venv/bin/activate
2026-04-17 05:09:12.594761 | orchestrator | ++ deactivate nondestructive
2026-04-17 05:09:12.594778 | orchestrator | ++ '[' -n '' ']'
2026-04-17 05:09:12.594786 | orchestrator | ++ '[' -n '' ']'
2026-04-17 05:09:12.594793 | orchestrator | ++ hash -r
2026-04-17 05:09:12.594801 | orchestrator | ++ '[' -n '' ']'
2026-04-17 05:09:12.594808 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-17 05:09:12.594815 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-17 05:09:12.594826 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-17 05:09:12.595080 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-17 05:09:12.595093 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-17 05:09:12.595101 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-17 05:09:12.595108 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-17 05:09:12.595116 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 05:09:12.595179 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 05:09:12.595189 | orchestrator | ++ export PATH
2026-04-17 05:09:12.595197 | orchestrator | ++ '[' -n '' ']'
2026-04-17 05:09:12.595258 | orchestrator | ++ '[' -z '' ']'
2026-04-17 05:09:12.595268 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-17 05:09:12.595276 | orchestrator | ++ PS1='(venv) '
2026-04-17 05:09:12.595283 | orchestrator | ++ export PS1
2026-04-17 05:09:12.595290 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-17 05:09:12.595297 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-17 05:09:12.595304 | orchestrator | ++ hash -r
2026-04-17 05:09:12.595315 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-17 05:09:13.729099 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-17 05:09:13.729842 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-17 05:09:13.731554 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-17 05:09:13.732714 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-17 05:09:13.734075 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.1)
2026-04-17 05:09:13.744118 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-17 05:09:13.745618 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-17 05:09:13.746675 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-17 05:09:13.748111 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-17 05:09:13.785927 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-17 05:09:13.787486 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-17 05:09:13.789598 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-17 05:09:13.790805 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-17 05:09:13.794880 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-17 05:09:14.078806 | orchestrator | ++ which gilt
2026-04-17 05:09:14.080201 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-17 05:09:14.080250 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-17 05:09:14.354341 | orchestrator | osism.cfg-generics:
2026-04-17 05:09:14.475224 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
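The set-manager-version.sh step traced above pins `manager_version` and drops the explicit `ceph_version`/`openstack_version` lines when a concrete release is requested. A minimal sketch of that logic, assuming a temp file stands in for `/opt/configuration/environments/manager/configuration.yml` and the sample pre-upgrade values are invented:

```shell
set -e
VERSION="10.0.0"
CFG="$(mktemp)"   # stand-in for the real configuration.yml
# hypothetical pre-upgrade contents for illustration
printf 'manager_version: latest\nceph_version: quincy\nopenstack_version: 2024.1\n' > "$CFG"

# pin the manager release, as in the sed calls in the trace
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$CFG"
if [ "$VERSION" != "latest" ]; then
    # a pinned release brings its own ceph/openstack defaults,
    # so the explicit pins are removed
    sed -i '/ceph_version:/d' "$CFG"
    sed -i '/openstack_version:/d' "$CFG"
fi
cat "$CFG"
```

After this, only the pinned `manager_version: 10.0.0` line remains; the release's own defaults then apply for Ceph and OpenStack.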
2026-04-17 05:09:14.476339 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-17 05:09:14.483332 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-17 05:09:14.483408 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-17 05:09:15.586700 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-17 05:09:15.599843 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-17 05:09:15.981321 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-17 05:09:16.038582 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-17 05:09:16.038689 | orchestrator | + deactivate
2026-04-17 05:09:16.038706 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-17 05:09:16.038720 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 05:09:16.038731 | orchestrator | + export PATH
2026-04-17 05:09:16.038743 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-17 05:09:16.038754 | orchestrator | + '[' -n '' ']'
2026-04-17 05:09:16.038765 | orchestrator | + hash -r
2026-04-17 05:09:16.038776 | orchestrator | + '[' -n '' ']'
2026-04-17 05:09:16.038787 | orchestrator | + unset VIRTUAL_ENV
2026-04-17 05:09:16.038810 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-17 05:09:16.038822 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-17 05:09:16.038833 | orchestrator | + unset -f deactivate
2026-04-17 05:09:16.038844 | orchestrator | + popd
2026-04-17 05:09:16.038889 | orchestrator | ~
2026-04-17 05:09:16.041285 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-17 05:09:16.041359 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-04-17 05:09:16.049722 | orchestrator | + set -e
2026-04-17 05:09:16.049766 | orchestrator | + NAMESPACE=kolla/release
2026-04-17 05:09:16.049778 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-17 05:09:16.054992 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-17 05:09:16.064064 | orchestrator | /opt/configuration ~
2026-04-17 05:09:16.064088 | orchestrator | + set -e
2026-04-17 05:09:16.064096 | orchestrator | + pushd /opt/configuration
2026-04-17 05:09:16.064102 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-17 05:09:16.064109 | orchestrator | + source /opt/venv/bin/activate
2026-04-17 05:09:16.064226 | orchestrator | ++ deactivate nondestructive
2026-04-17 05:09:16.064306 | orchestrator | ++ '[' -n '' ']'
2026-04-17 05:09:16.064353 | orchestrator | ++ '[' -n '' ']'
2026-04-17 05:09:16.064402 | orchestrator | ++ hash -r
2026-04-17 05:09:16.064476 | orchestrator | ++ '[' -n '' ']'
2026-04-17 05:09:16.064484 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-17 05:09:16.064579 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-17 05:09:16.064593 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-17 05:09:16.064752 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-17 05:09:16.064761 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-17 05:09:16.064767 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-17 05:09:16.064780 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-17 05:09:16.064788 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 05:09:16.064834 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 05:09:16.064842 | orchestrator | ++ export PATH
2026-04-17 05:09:16.064928 | orchestrator | ++ '[' -n '' ']'
2026-04-17 05:09:16.064937 | orchestrator | ++ '[' -z '' ']'
2026-04-17 05:09:16.064944 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-17 05:09:16.065019 | orchestrator | ++ PS1='(venv) '
2026-04-17 05:09:16.065028 | orchestrator | ++ export PS1
2026-04-17 05:09:16.065035 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-17 05:09:16.065081 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-17 05:09:16.065115 | orchestrator | ++ hash -r
2026-04-17 05:09:16.065194 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-17 05:09:16.619188 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-17 05:09:16.620126 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-17 05:09:16.621516 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-17 05:09:16.622853 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-17 05:09:16.624350 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.1)
2026-04-17 05:09:16.635855 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-17 05:09:16.637316 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-17 05:09:16.638348 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-17 05:09:16.639746 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-17 05:09:16.675174 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-17 05:09:16.676758 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-17 05:09:16.678346 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-17 05:09:16.679745 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-17 05:09:16.683534 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-17 05:09:16.905760 | orchestrator | ++ which gilt
2026-04-17 05:09:16.907117 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-17 05:09:16.907141 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-17 05:09:17.105143 | orchestrator | osism.cfg-generics:
2026-04-17 05:09:17.209849 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-17 05:09:17.209963 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-17 05:09:17.210209 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-17 05:09:17.210289 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-17 05:09:17.879349 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-17 05:09:17.890067 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-17 05:09:18.259752 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-17 05:09:18.320745 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-17 05:09:18.320829 | orchestrator | + deactivate
2026-04-17 05:09:18.320841 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-17 05:09:18.320852 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 05:09:18.320860 | orchestrator | + export PATH
2026-04-17 05:09:18.320869 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-17 05:09:18.320877 | orchestrator | + '[' -n '' ']'
2026-04-17 05:09:18.320896 | orchestrator | + hash -r
2026-04-17 05:09:18.320933 | orchestrator | + '[' -n '' ']'
2026-04-17 05:09:18.320944 | orchestrator | + unset VIRTUAL_ENV
2026-04-17 05:09:18.320953 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-17 05:09:18.320961 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-17 05:09:18.320969 | orchestrator | + unset -f deactivate
2026-04-17 05:09:18.321121 | orchestrator | ~
2026-04-17 05:09:18.321139 | orchestrator | + popd
2026-04-17 05:09:18.323551 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-04-17 05:09:18.383412 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-17 05:09:18.383485 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-17 05:09:18.383772 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-17 05:09:18.466862 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-17 05:09:18.466959 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-17 05:09:18.472498 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-17 05:09:18.479235 | orchestrator | ++ semver v0.20251130.0 9.5.0
2026-04-17 05:09:18.537583 | orchestrator | + [[ -1 -le 0 ]]
2026-04-17 05:09:18.537655 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-17 05:09:18.538509 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-17 05:09:18.636487 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-17 05:09:18.636591 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-04-17 05:09:18.639063 | orchestrator | +++ semver 2024.2 2024.2
2026-04-17 05:09:18.719440 | orchestrator | ++ '[' 0 -le 0 ']'
2026-04-17 05:09:18.720250 | orchestrator | +++ semver 2024.2 2025.1
2026-04-17 05:09:18.786114 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-04-17 05:09:18.786194 | orchestrator | ++ echo false
2026-04-17 05:09:18.786211 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-04-17 05:09:18.786220 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-17 05:09:18.786227 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-04-17 05:09:18.786252 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-04-17 05:09:18.786287 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
2026-04-17 05:09:18.791947 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-04-17 05:09:18.792068 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-04-17 05:09:18.810930 | orchestrator | export RABBITMQ3TO4=true
2026-04-17 05:09:18.814036 | orchestrator | + osism update manager
2026-04-17 05:09:24.826559 | orchestrator | Collecting uv
2026-04-17 05:09:24.911236 | orchestrator | Downloading uv-0.11.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-04-17 05:09:24.928348 | orchestrator | Downloading uv-0.11.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.9 MB)
2026-04-17 05:09:25.707018 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.9/24.9 MB 35.3 MB/s eta 0:00:00
2026-04-17 05:09:25.776707 | orchestrator | Installing collected packages: uv
2026-04-17 05:09:26.286260 | orchestrator | Successfully installed uv-0.11.7
2026-04-17 05:09:27.055999 | orchestrator | Resolved 11 packages in 446ms
2026-04-17 05:09:27.086615 | orchestrator | Downloading cryptography (4.3MiB)
2026-04-17 05:09:27.086892 | orchestrator | Downloading ansible-core (2.1MiB)
2026-04-17 05:09:27.087081 | orchestrator | Downloading netaddr (2.2MiB)
2026-04-17 05:09:27.195591 | orchestrator | Downloading ansible (54.5MiB)
2026-04-17 05:09:27.420704 | orchestrator | Downloaded netaddr
2026-04-17 05:09:27.469519 | orchestrator | Downloaded cryptography
2026-04-17 05:09:27.657742 | orchestrator | Downloaded ansible-core
2026-04-17 05:09:35.029388 | orchestrator | Downloaded ansible
2026-04-17 05:09:35.029641 | orchestrator | Prepared 11 packages in 7.97s
2026-04-17 05:09:35.625649 | orchestrator | Installed 11 packages in 594ms
2026-04-17 05:09:35.625733 | orchestrator | + ansible==11.11.0
2026-04-17 05:09:35.625744 | orchestrator | + ansible-core==2.18.15
2026-04-17 05:09:35.625753 | orchestrator | + cffi==2.0.0
2026-04-17 05:09:35.625761 | orchestrator | + cryptography==46.0.7
2026-04-17 05:09:35.625769 | orchestrator | + jinja2==3.1.6
2026-04-17 05:09:35.625776 | orchestrator | + markupsafe==3.0.3
2026-04-17 05:09:35.625784 | orchestrator | + netaddr==1.3.0
2026-04-17 05:09:35.625791 | orchestrator | + packaging==26.1
2026-04-17 05:09:35.625801 | orchestrator | + pycparser==3.0
2026-04-17 05:09:35.625808 | orchestrator | + pyyaml==6.0.3
2026-04-17 05:09:35.626057 | orchestrator | + resolvelib==1.0.1
2026-04-17 05:09:36.743885 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-198463n7s10aab/tmpgwf8skn9/ansible-collection-servicesbeff8m8o'...
2026-04-17 05:09:38.322740 | orchestrator | Your branch is up to date with 'origin/main'.
2026-04-17 05:09:38.322842 | orchestrator | Already on 'main'
2026-04-17 05:09:38.788355 | orchestrator | Starting galaxy collection install process
2026-04-17 05:09:38.788450 | orchestrator | Process install dependency map
2026-04-17 05:09:38.788465 | orchestrator | Starting collection install process
2026-04-17 05:09:38.788476 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-04-17 05:09:38.788488 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-04-17 05:09:38.788498 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-17 05:09:39.335202 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-198488b4im_eqn/tmpokqst04l/ansible-playbooks-managersm5syug7'...
2026-04-17 05:09:40.188714 | orchestrator | Your branch is up to date with 'origin/main'.
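The `semver` comparisons traced above gate migration steps on whether the upgrade path crosses the 10.0.0 boundary (old version below it, target version at or above it), which is what sets `MANAGER_UPGRADE_CROSSES_10=true`. A hedged sketch of that gating, using a hypothetical `semver_ge` helper built on `sort -V` instead of the job's `semver` CLI, with simplified illustrative version strings:

```shell
# semver_ge A B: succeeds if version A >= version B (GNU sort -V)
semver_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# illustrative values; the real job compares the running container's
# label (e.g. v0.20251130.0) against the requested release
OLD_MANAGER_VERSION="9.5.0"
MANAGER_VERSION="10.0.0"

MANAGER_UPGRADE_CROSSES_10=false
# crosses the boundary when old < 10.0.0 and new >= 10.0.0
if ! semver_ge "$OLD_MANAGER_VERSION" "10.0.0" && semver_ge "$MANAGER_VERSION" "10.0.0"; then
    MANAGER_UPGRADE_CROSSES_10=true
fi
echo "$MANAGER_UPGRADE_CROSSES_10"
```

When the flag is true, the trace shows the script rewriting the RabbitMQ vhost settings and persisting `RABBITMQ3TO4=true` to `/opt/manager-vars.sh` before `osism update manager` runs.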
2026-04-17 05:09:40.188833 | orchestrator | Already on 'main'
2026-04-17 05:09:40.486208 | orchestrator | Starting galaxy collection install process
2026-04-17 05:09:40.486309 | orchestrator | Process install dependency map
2026-04-17 05:09:40.486326 | orchestrator | Starting collection install process
2026-04-17 05:09:40.486338 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-04-17 05:09:40.486351 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-04-17 05:09:40.486363 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-04-17 05:09:41.243934 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-04-17 05:09:41.244110 | orchestrator | -vvvv to see details
2026-04-17 05:09:41.706535 | orchestrator |
2026-04-17 05:09:41.706641 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-04-17 05:09:41.706657 | orchestrator |
2026-04-17 05:09:41.706692 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 05:09:46.004793 | orchestrator | ok: [testbed-manager]
2026-04-17 05:09:46.004894 | orchestrator |
2026-04-17 05:09:46.004911 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-17 05:09:46.083782 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-17 05:09:46.083900 | orchestrator |
2026-04-17 05:09:46.083924 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-17 05:09:48.111158 | orchestrator | ok: [testbed-manager]
2026-04-17 05:09:48.111259 | orchestrator |
2026-04-17 05:09:48.111274 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-17 05:09:48.162068 | orchestrator | ok: [testbed-manager]
2026-04-17 05:09:48.162141 | orchestrator |
2026-04-17 05:09:48.162149 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-17 05:09:48.231703 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-17 05:09:48.231797 | orchestrator |
2026-04-17 05:09:48.231813 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-17 05:09:52.767371 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-04-17 05:09:52.767481 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-04-17 05:09:52.767496 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-17 05:09:52.767520 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-04-17 05:09:52.767532 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-17 05:09:52.767543 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-17 05:09:52.767554 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-17 05:09:52.767565 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-04-17 05:09:52.767577 | orchestrator |
2026-04-17 05:09:52.767589 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-17 05:09:53.923949 | orchestrator | ok: [testbed-manager]
2026-04-17 05:09:53.924096 | orchestrator |
2026-04-17 05:09:53.924112 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-17 05:09:54.933070 | orchestrator | ok: [testbed-manager]
2026-04-17 05:09:54.933189 | orchestrator |
2026-04-17 05:09:54.933215 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-17 05:09:55.031302 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-17 05:09:55.031397 | orchestrator |
2026-04-17 05:09:55.031411 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-17 05:09:56.920676 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-04-17 05:09:56.920779 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-04-17 05:09:56.920793 | orchestrator |
2026-04-17 05:09:56.920805 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-17 05:09:57.927092 | orchestrator | ok: [testbed-manager]
2026-04-17 05:09:57.927209 | orchestrator |
2026-04-17 05:09:57.927227 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-17 05:09:58.011179 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:09:58.011279 | orchestrator |
2026-04-17 05:09:58.011295 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-17 05:09:58.103374 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-17 05:09:58.103443 | orchestrator |
2026-04-17 05:09:58.103450 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-17 05:09:59.130469 | orchestrator | ok: [testbed-manager]
2026-04-17 05:09:59.130570 | orchestrator |
2026-04-17 05:09:59.130587 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-17 05:09:59.228188 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-17 05:09:59.228285 | orchestrator |
2026-04-17 05:09:59.228303 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-17 05:10:01.229314 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-17 05:10:01.229381 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-17 05:10:01.229387 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:01.229392 | orchestrator |
2026-04-17 05:10:01.229397 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-17 05:10:02.193681 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:02.193762 | orchestrator |
2026-04-17 05:10:02.193772 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-17 05:10:02.266875 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:10:02.266977 | orchestrator |
2026-04-17 05:10:02.267057 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-17 05:10:02.370168 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-17 05:10:02.370263 | orchestrator |
2026-04-17 05:10:02.370278 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-17 05:10:03.065835 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:03.065961 | orchestrator |
2026-04-17 05:10:03.066738 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-17 05:10:03.628224 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:03.628388 | orchestrator |
2026-04-17 05:10:03.628408 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-17 05:10:05.550748 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-04-17 05:10:05.550852 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-04-17 05:10:05.550868 | orchestrator |
2026-04-17 05:10:05.550881 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-17 05:10:06.766066 | orchestrator | changed: [testbed-manager]
2026-04-17 05:10:06.766165 | orchestrator |
2026-04-17 05:10:06.766182 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-17 05:10:07.362349 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:07.362436 | orchestrator |
2026-04-17 05:10:07.362451 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-17 05:10:07.899403 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:07.899475 | orchestrator |
2026-04-17 05:10:07.899487 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-17 05:10:07.953979 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:10:07.954147 | orchestrator |
2026-04-17 05:10:07.954164 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-17 05:10:08.040580 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-17 05:10:08.040661 | orchestrator |
2026-04-17 05:10:08.040678 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-17 05:10:08.098857 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:08.098954 | orchestrator |
2026-04-17 05:10:08.098969 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-17 05:10:11.113866 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-04-17 05:10:11.113953 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-04-17 05:10:11.113967 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-04-17 05:10:11.113981 | orchestrator |
2026-04-17 05:10:11.114112 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-17 05:10:12.153585 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:12.153662 | orchestrator |
2026-04-17 05:10:12.153675 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-17 05:10:13.173456 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:13.173561 | orchestrator |
2026-04-17 05:10:13.173577 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-17 05:10:14.167495 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:14.167604 | orchestrator |
2026-04-17 05:10:14.167621 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-17 05:10:14.242894 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-17 05:10:14.242988 | orchestrator |
2026-04-17 05:10:14.243061 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-17 05:10:14.314246 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:14.314363 | orchestrator |
2026-04-17 05:10:14.314388 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-17 05:10:15.350706 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-04-17 05:10:15.350809 | orchestrator |
2026-04-17 05:10:15.350826 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-17 05:10:15.443225 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-17 05:10:15.443311 | orchestrator |
2026-04-17 05:10:15.443330 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-17 05:10:16.436158 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:16.436260 | orchestrator |
2026-04-17 05:10:16.436276 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-17 05:10:17.624663 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:17.624793 | orchestrator |
2026-04-17 05:10:17.624810 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-17 05:10:17.696271 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:10:17.696361 | orchestrator |
2026-04-17 05:10:17.696375 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-17 05:10:17.758746 | orchestrator | ok: [testbed-manager]
2026-04-17 05:10:17.758830 | orchestrator |
2026-04-17 05:10:17.758842 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-17 05:10:19.181534 | orchestrator | changed: [testbed-manager]
2026-04-17 05:10:19.181656 | orchestrator |
2026-04-17 05:10:19.182459 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-17 05:11:32.568233 | orchestrator | changed: [testbed-manager]
2026-04-17 05:11:32.568355 | orchestrator |
2026-04-17 05:11:32.568372 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-17 05:11:33.894778 | orchestrator | ok: [testbed-manager]
2026-04-17 05:11:33.894851 | orchestrator |
2026-04-17 05:11:33.894860 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-17 05:11:33.966780 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:11:33.966904 | orchestrator |
2026-04-17 05:11:33.966931 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-17 05:11:34.850264 | orchestrator | ok: [testbed-manager]
2026-04-17 05:11:34.850417 | orchestrator |
2026-04-17 05:11:34.850436 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-17 05:11:34.935384 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:11:34.935478 | orchestrator |
2026-04-17 05:11:34.935494 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-17 05:11:34.935507 | orchestrator |
2026-04-17 05:11:34.935518 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-17 05:11:54.043689 | orchestrator | changed: [testbed-manager]
2026-04-17 05:11:54.043799 | orchestrator |
2026-04-17 05:11:54.043816 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-17 05:12:54.115450 | orchestrator | Pausing for 60 seconds
2026-04-17 05:12:54.115567 | orchestrator | changed: [testbed-manager]
2026-04-17 05:12:54.115583 | orchestrator |
2026-04-17 05:12:54.115595 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-04-17 05:12:54.167244 | orchestrator | ok: [testbed-manager]
2026-04-17 05:12:54.167352 | orchestrator |
2026-04-17 05:12:54.167360 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-17 05:12:58.023418 | orchestrator | changed: [testbed-manager]
2026-04-17 05:12:58.023526 | orchestrator |
2026-04-17 05:12:58.023542 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-17 05:14:00.910994 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-17 05:14:00.911106 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-17 05:14:00.911121 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-04-17 05:14:00.911133 | orchestrator | changed: [testbed-manager] 2026-04-17 05:14:00.911145 | orchestrator | 2026-04-17 05:14:00.911155 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-17 05:14:07.339592 | orchestrator | changed: [testbed-manager] 2026-04-17 05:14:07.339692 | orchestrator | 2026-04-17 05:14:07.339704 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-17 05:14:07.432290 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-17 05:14:07.432398 | orchestrator | 2026-04-17 05:14:07.432443 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-17 05:14:07.432455 | orchestrator | 2026-04-17 05:14:07.432464 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-17 05:14:07.506122 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:14:07.506235 | orchestrator | 2026-04-17 05:14:07.506251 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-17 05:14:07.590687 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-17 05:14:07.590787 | orchestrator | 2026-04-17 05:14:07.590804 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-17 05:14:08.755928 | orchestrator | changed: [testbed-manager] 2026-04-17 05:14:08.756035 | orchestrator | 2026-04-17 05:14:08.756053 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-17 05:14:12.678090 
| orchestrator | ok: [testbed-manager] 2026-04-17 05:14:12.678203 | orchestrator | 2026-04-17 05:14:12.678220 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-04-17 05:14:12.772205 | orchestrator | ok: [testbed-manager] => { 2026-04-17 05:14:12.772294 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-17 05:14:12.772308 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-17 05:14:12.772319 | orchestrator | "Checking running containers against expected versions...", 2026-04-17 05:14:12.772331 | orchestrator | "", 2026-04-17 05:14:12.772343 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-17 05:14:12.772354 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-17 05:14:12.772366 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.772377 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-17 05:14:12.772388 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.772399 | orchestrator | "", 2026-04-17 05:14:12.772411 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-17 05:14:12.772470 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-17 05:14:12.772484 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.772495 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-17 05:14:12.772506 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.772517 | orchestrator | "", 2026-04-17 05:14:12.772528 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-17 05:14:12.772539 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-17 05:14:12.772550 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.772561 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-17 05:14:12.772572 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.772583 | orchestrator | "", 2026-04-17 05:14:12.772594 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-17 05:14:12.772606 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-17 05:14:12.772616 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.772627 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-17 05:14:12.772638 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.772649 | orchestrator | "", 2026-04-17 05:14:12.772660 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-17 05:14:12.772671 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-17 05:14:12.772682 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.772693 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-17 05:14:12.772704 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.772715 | orchestrator | "", 2026-04-17 05:14:12.772726 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-17 05:14:12.772762 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.772784 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.772796 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.772807 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.772818 | orchestrator | "", 2026-04-17 05:14:12.772828 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-17 05:14:12.772839 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-17 05:14:12.772850 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.772861 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-17 
05:14:12.772872 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.772882 | orchestrator | "", 2026-04-17 05:14:12.772893 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-17 05:14:12.772904 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-17 05:14:12.772914 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.772925 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-17 05:14:12.772936 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.772946 | orchestrator | "", 2026-04-17 05:14:12.772957 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-17 05:14:12.772968 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-17 05:14:12.772978 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.772989 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-17 05:14:12.773000 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.773016 | orchestrator | "", 2026-04-17 05:14:12.773026 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-17 05:14:12.773037 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-17 05:14:12.773048 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.773059 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-17 05:14:12.773070 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.773081 | orchestrator | "", 2026-04-17 05:14:12.773092 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-17 05:14:12.773102 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.773113 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.773124 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.773134 | orchestrator | " Status: ✅ MATCH", 2026-04-17 
05:14:12.773145 | orchestrator | "", 2026-04-17 05:14:12.773156 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-17 05:14:12.773166 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.773177 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.773188 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.773198 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.773209 | orchestrator | "", 2026-04-17 05:14:12.773220 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-17 05:14:12.773230 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.773241 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.773252 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.773262 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.773273 | orchestrator | "", 2026-04-17 05:14:12.773283 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-17 05:14:12.773294 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.773305 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.773315 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.773343 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.773354 | orchestrator | "", 2026-04-17 05:14:12.773365 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-17 05:14:12.773376 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.773395 | orchestrator | " Enabled: true", 2026-04-17 05:14:12.773406 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-17 05:14:12.773417 | orchestrator | " Status: ✅ MATCH", 2026-04-17 05:14:12.773450 | orchestrator | "", 2026-04-17 05:14:12.773461 | orchestrator | "=== Summary 
===", 2026-04-17 05:14:12.773472 | orchestrator | "Errors (version mismatches): 0", 2026-04-17 05:14:12.773483 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-17 05:14:12.773493 | orchestrator | "", 2026-04-17 05:14:12.773504 | orchestrator | "✅ All running containers match expected versions!" 2026-04-17 05:14:12.773515 | orchestrator | ] 2026-04-17 05:14:12.773526 | orchestrator | } 2026-04-17 05:14:12.773537 | orchestrator | 2026-04-17 05:14:12.773548 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-17 05:14:12.845719 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:14:12.845813 | orchestrator | 2026-04-17 05:14:12.845827 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:14:12.845840 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-04-17 05:14:12.845851 | orchestrator | 2026-04-17 05:14:25.862204 | orchestrator | 2026-04-17 05:14:25 | INFO  | Task e801e0d5-6985-40e6-9e0a-683bf2a1016f (sync inventory) is running in background. Output coming soon. 
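The version check above compares, for each managed service, the image reference a running container actually uses against the expected reference from configuration. A minimal sketch of that kind of comparison (the function name and messages are illustrative, not the actual osism.services.manager script):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a container version check: compare the image a
# container is running (via docker inspect) against the expected reference.
# Exit codes: 0 = match, 1 = mismatch, 2 = container not running.
check_container_version() {
    local name="$1" expected="$2"
    local running
    # .Config.Image is the image reference the container was created from.
    running=$(docker inspect -f '{{.Config.Image}}' "$name" 2>/dev/null) || {
        echo "WARNING: $name is not running"
        return 2
    }
    if [[ "$running" == "$expected" ]]; then
        echo "  Status: MATCH ($running)"
        return 0
    fi
    echo "  Status: MISMATCH (expected $expected, running $running)"
    return 1
}

# Example (not executed here):
# check_container_version osismclient registry.osism.tech/osism/osism:0.20260320.0
```

Running such a check per service and counting mismatches/warnings yields exactly the kind of summary shown above ("Errors (version mismatches): 0").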
2026-04-17 05:14:58.476814 | orchestrator | 2026-04-17 05:14:27 | INFO  | Starting group_vars file reorganization 2026-04-17 05:14:58.476929 | orchestrator | 2026-04-17 05:14:27 | INFO  | Moved 0 file(s) to their respective directories 2026-04-17 05:14:58.476948 | orchestrator | 2026-04-17 05:14:27 | INFO  | Group_vars file reorganization completed 2026-04-17 05:14:58.476960 | orchestrator | 2026-04-17 05:14:30 | INFO  | Starting variable preparation from inventory 2026-04-17 05:14:58.476972 | orchestrator | 2026-04-17 05:14:33 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-04-17 05:14:58.476983 | orchestrator | 2026-04-17 05:14:33 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-04-17 05:14:58.476994 | orchestrator | 2026-04-17 05:14:33 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-04-17 05:14:58.477005 | orchestrator | 2026-04-17 05:14:33 | INFO  | 3 file(s) written, 6 host(s) processed 2026-04-17 05:14:58.477015 | orchestrator | 2026-04-17 05:14:33 | INFO  | Variable preparation completed 2026-04-17 05:14:58.477026 | orchestrator | 2026-04-17 05:14:35 | INFO  | Starting inventory overwrite handling 2026-04-17 05:14:58.477037 | orchestrator | 2026-04-17 05:14:35 | INFO  | Handling group overwrites in 99-overwrite 2026-04-17 05:14:58.477048 | orchestrator | 2026-04-17 05:14:35 | INFO  | Removing group frr:children from 60-generic 2026-04-17 05:14:58.477059 | orchestrator | 2026-04-17 05:14:35 | INFO  | Removing group netbird:children from 50-infrastructure 2026-04-17 05:14:58.477069 | orchestrator | 2026-04-17 05:14:35 | INFO  | Removing group ceph-mds from 50-ceph 2026-04-17 05:14:58.477080 | orchestrator | 2026-04-17 05:14:35 | INFO  | Removing group ceph-rgw from 50-ceph 2026-04-17 05:14:58.477091 | orchestrator | 2026-04-17 05:14:35 | INFO  | Handling group overwrites in 20-roles 2026-04-17 05:14:58.477102 | orchestrator | 2026-04-17 05:14:35 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-04-17 05:14:58.477112 | orchestrator | 2026-04-17 05:14:35 | INFO  | Removed 5 group(s) in total 2026-04-17 05:14:58.477123 | orchestrator | 2026-04-17 05:14:35 | INFO  | Inventory overwrite handling completed 2026-04-17 05:14:58.477133 | orchestrator | 2026-04-17 05:14:36 | INFO  | Starting merge of inventory files 2026-04-17 05:14:58.477144 | orchestrator | 2026-04-17 05:14:36 | INFO  | Inventory files merged successfully 2026-04-17 05:14:58.477182 | orchestrator | 2026-04-17 05:14:41 | INFO  | Generating minified hosts file 2026-04-17 05:14:58.477194 | orchestrator | 2026-04-17 05:14:42 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml 2026-04-17 05:14:58.477217 | orchestrator | 2026-04-17 05:14:42 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json 2026-04-17 05:14:58.477228 | orchestrator | 2026-04-17 05:14:44 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-04-17 05:14:58.477239 | orchestrator | 2026-04-17 05:14:56 | INFO  | Successfully wrote ClusterShell configuration 2026-04-17 05:14:58.711197 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-17 05:14:58.711312 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-17 05:14:58.711330 | orchestrator | + local max_attempts=60 2026-04-17 05:14:58.711344 | orchestrator | + local name=kolla-ansible 2026-04-17 05:14:58.711355 | orchestrator | + local attempt_num=1 2026-04-17 05:14:58.711579 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-17 05:14:58.744280 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 05:14:58.744358 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-17 05:14:58.744433 | orchestrator | + local max_attempts=60 2026-04-17 05:14:58.744456 | orchestrator | + local name=osism-ansible 2026-04-17 05:14:58.744475 | orchestrator | + local attempt_num=1 2026-04-17 
05:14:58.745185 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-17 05:14:58.780563 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 05:14:58.780651 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-04-17 05:14:58.996636 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-17 05:14:58.996742 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-17 05:14:58.996760 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-17 05:14:58.996772 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-17 05:14:58.996804 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-04-17 05:14:58.996815 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-04-17 05:14:58.996825 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-04-17 05:14:58.996836 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-04-17 05:14:58.996847 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 39 seconds ago 2026-04-17 05:14:58.996858 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 
mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-04-17 05:14:58.996868 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-04-17 05:14:58.996903 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 minutes (healthy) 6379/tcp 2026-04-17 05:14:58.996914 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-17 05:14:58.996925 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-04-17 05:14:58.996936 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-04-17 05:14:58.996946 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-04-17 05:14:59.003086 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-04-17 05:14:59.003132 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-04-17 05:14:59.003146 | orchestrator | + osism apply facts 2026-04-17 05:15:10.463117 | orchestrator | 2026-04-17 05:15:10 | INFO  | Prepare task for execution of facts. 2026-04-17 05:15:10.553473 | orchestrator | 2026-04-17 05:15:10 | INFO  | Task 17e733ec-2cf7-4ce0-ba13-219fc028f239 (facts) was prepared for execution. 2026-04-17 05:15:10.553635 | orchestrator | 2026-04-17 05:15:10 | INFO  | It takes a moment until task 17e733ec-2cf7-4ce0-ba13-219fc028f239 (facts) has been started and output is visible here. 
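The `wait_for_container_healthy` helper seen in the xtrace above can be reconstructed roughly as follows (based on the traced variable names and the `docker inspect` call; the exact upstream loop body and sleep interval may differ):

```shell
#!/usr/bin/env bash
# Reconstruction of wait_for_container_healthy from the xtrace above:
# poll Docker's health status for a container until it reports "healthy"
# or the attempt budget is exhausted. Sleep interval is an assumption.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5  # assumed polling interval
    done
}

# Example (not executed here):
# wait_for_container_healthy 60 kolla-ansible
```

Note that this only works for containers that define a `HEALTHCHECK`; for containers without one (e.g. `osism-frontend` above, which shows no `(healthy)` suffix), `.State.Health` is absent and the inspect template errors out.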
2026-04-17 05:15:35.777389 | orchestrator | 2026-04-17 05:15:35.777509 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-17 05:15:35.777526 | orchestrator | 2026-04-17 05:15:35.777539 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-17 05:15:35.777550 | orchestrator | Friday 17 April 2026 05:15:16 +0000 (0:00:02.123) 0:00:02.123 ********** 2026-04-17 05:15:35.777622 | orchestrator | ok: [testbed-manager] 2026-04-17 05:15:35.777637 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:15:35.777648 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:15:35.777659 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:15:35.777670 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:15:35.777680 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:15:35.777691 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:15:35.777702 | orchestrator | 2026-04-17 05:15:35.777713 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-17 05:15:35.777724 | orchestrator | Friday 17 April 2026 05:15:20 +0000 (0:00:04.078) 0:00:06.202 ********** 2026-04-17 05:15:35.777735 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:15:35.777747 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:15:35.777758 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:15:35.777769 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:15:35.777779 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:15:35.777790 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:15:35.777801 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:15:35.777812 | orchestrator | 2026-04-17 05:15:35.777823 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-17 05:15:35.777834 | orchestrator | 2026-04-17 05:15:35.777845 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-17 05:15:35.777856 | orchestrator | Friday 17 April 2026 05:15:24 +0000 (0:00:03.586) 0:00:09.788 ********** 2026-04-17 05:15:35.777867 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:15:35.777877 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:15:35.777888 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:15:35.777899 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:15:35.777912 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:15:35.777953 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:15:35.777966 | orchestrator | ok: [testbed-manager] 2026-04-17 05:15:35.777978 | orchestrator | 2026-04-17 05:15:35.777990 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-17 05:15:35.778002 | orchestrator | 2026-04-17 05:15:35.778086 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-17 05:15:35.778101 | orchestrator | Friday 17 April 2026 05:15:31 +0000 (0:00:07.834) 0:00:17.623 ********** 2026-04-17 05:15:35.778114 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:15:35.778126 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:15:35.778138 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:15:35.778150 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:15:35.778163 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:15:35.778175 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:15:35.778187 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:15:35.778204 | orchestrator | 2026-04-17 05:15:35.778222 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:15:35.778242 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 05:15:35.778262 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-17 05:15:35.778280 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 05:15:35.778299 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 05:15:35.778313 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 05:15:35.778324 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 05:15:35.778335 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 05:15:35.778345 | orchestrator | 2026-04-17 05:15:35.778357 | orchestrator | 2026-04-17 05:15:35.778368 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:15:35.778378 | orchestrator | Friday 17 April 2026 05:15:35 +0000 (0:00:03.399) 0:00:21.023 ********** 2026-04-17 05:15:35.778389 | orchestrator | =============================================================================== 2026-04-17 05:15:35.778400 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.83s 2026-04-17 05:15:35.778410 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 4.08s 2026-04-17 05:15:35.778421 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 3.59s 2026-04-17 05:15:35.778432 | orchestrator | Gather facts for all hosts ---------------------------------------------- 3.40s 2026-04-17 05:15:36.000436 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]] 2026-04-17 05:15:36.001174 | orchestrator | ++ semver 10.0.0 10.0.0-0 2026-04-17 05:15:36.101307 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-17 05:15:36.102265 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-04-17 05:15:36.142277 | 
orchestrator | + OPENSTACK_VERSION=2025.1 2026-04-17 05:15:36.142350 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-04-17 05:15:36.148346 | orchestrator | + set -e 2026-04-17 05:15:36.148396 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-04-17 05:15:36.148406 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-17 05:15:36.157135 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-04-17 05:15:36.167679 | orchestrator | 2026-04-17 05:15:36.167725 | orchestrator | # UPGRADE SERVICES 2026-04-17 05:15:36.167759 | orchestrator | 2026-04-17 05:15:36.167768 | orchestrator | + set -e 2026-04-17 05:15:36.167777 | orchestrator | + echo 2026-04-17 05:15:36.167786 | orchestrator | + echo '# UPGRADE SERVICES' 2026-04-17 05:15:36.167795 | orchestrator | + echo 2026-04-17 05:15:36.167803 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 05:15:36.168864 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 05:15:36.168883 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 05:15:36.168891 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 05:15:36.168900 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 05:15:36.168936 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 05:15:36.168948 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 05:15:36.168956 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-17 05:15:36.168965 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-17 05:15:36.168974 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 05:15:36.168982 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 05:15:36.168991 | orchestrator | ++ export ARA=false 2026-04-17 05:15:36.168999 | orchestrator | ++ ARA=false 2026-04-17 05:15:36.169008 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 05:15:36.169016 | orchestrator | ++ DEPLOY_MODE=manager 
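The `set-kolla-namespace.sh` call traced above boils down to one `sed` rewrite of the `docker_namespace` key. A sketch consistent with that trace, wrapped as a function for reuse (the function name and second argument are illustrative; the default path is the one from the trace):

```shell
#!/usr/bin/env bash
# Sketch of the set-kolla-namespace.sh behavior seen in the xtrace above:
# point docker_namespace at the release namespace for the detected
# OpenStack version, e.g. kolla/release/2025.1.
set_kolla_namespace() {
    local namespace="$1"
    local file="${2:-/opt/configuration/inventory/group_vars/all/kolla.yml}"
    # '#' as the sed delimiter avoids escaping the slashes in the namespace.
    sed -i "s#docker_namespace: .*#docker_namespace: ${namespace}#g" "$file"
}

# Example (not executed here):
# set_kolla_namespace kolla/release/2025.1
```

Using `#` as the substitution delimiter is what makes a slash-containing replacement like `kolla/release/2025.1` safe without escaping.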
2026-04-17 05:15:36.169025 | orchestrator | ++ export TEMPEST=false 2026-04-17 05:15:36.169033 | orchestrator | ++ TEMPEST=false 2026-04-17 05:15:36.169042 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 05:15:36.169050 | orchestrator | ++ IS_ZUUL=true 2026-04-17 05:15:36.169058 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 05:15:36.169067 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 05:15:36.169075 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 05:15:36.169084 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 05:15:36.169092 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 05:15:36.169101 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 05:15:36.169109 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 05:15:36.169118 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 05:15:36.169127 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 05:15:36.169135 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 05:15:36.169145 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-17 05:15:36.169153 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-17 05:15:36.169162 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-04-17 05:15:36.169217 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-04-17 05:15:36.169229 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-04-17 05:15:36.178820 | orchestrator | + set -e 2026-04-17 05:15:36.178863 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 05:15:36.180514 | orchestrator | 2026-04-17 05:15:36.180537 | orchestrator | # PULL IMAGES 2026-04-17 05:15:36.180548 | orchestrator | 2026-04-17 05:15:36.180559 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 05:15:36.180592 | orchestrator | ++ INTERACTIVE=false 2026-04-17 05:15:36.180603 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 05:15:36.180613 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 05:15:36.180624 | orchestrator | + 
source /opt/manager-vars.sh
2026-04-17 05:15:36.180634 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-17 05:15:36.180645 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-17 05:15:36.180656 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-17 05:15:36.180667 | orchestrator | ++ CEPH_VERSION=reef
2026-04-17 05:15:36.180677 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-17 05:15:36.180688 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-17 05:15:36.180699 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-17 05:15:36.180710 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-17 05:15:36.180721 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-17 05:15:36.180731 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-17 05:15:36.180742 | orchestrator | ++ export ARA=false
2026-04-17 05:15:36.180753 | orchestrator | ++ ARA=false
2026-04-17 05:15:36.180764 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-17 05:15:36.180774 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-17 05:15:36.180785 | orchestrator | ++ export TEMPEST=false
2026-04-17 05:15:36.180796 | orchestrator | ++ TEMPEST=false
2026-04-17 05:15:36.180807 | orchestrator | ++ export IS_ZUUL=true
2026-04-17 05:15:36.180818 | orchestrator | ++ IS_ZUUL=true
2026-04-17 05:15:36.180829 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96
2026-04-17 05:15:36.180839 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96
2026-04-17 05:15:36.180850 | orchestrator | ++ export EXTERNAL_API=false
2026-04-17 05:15:36.180861 | orchestrator | ++ EXTERNAL_API=false
2026-04-17 05:15:36.180871 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-17 05:15:36.180882 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-17 05:15:36.180892 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-17 05:15:36.180903 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-17 05:15:36.180938 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-17 05:15:36.180949 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-17 05:15:36.180960 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-17 05:15:36.180971 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-17 05:15:36.180981 | orchestrator | + echo
2026-04-17 05:15:36.180992 | orchestrator | + echo '# PULL IMAGES'
2026-04-17 05:15:36.181003 | orchestrator | + echo
2026-04-17 05:15:36.181239 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-17 05:15:36.254118 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-17 05:15:36.254201 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-17 05:15:37.619873 | orchestrator | 2026-04-17 05:15:37 | INFO  | Trying to run play pull-images in environment custom
2026-04-17 05:15:47.658234 | orchestrator | 2026-04-17 05:15:47 | INFO  | Prepare task for execution of pull-images.
2026-04-17 05:15:47.751835 | orchestrator | 2026-04-17 05:15:47 | INFO  | Task 2503aa83-778f-4982-b172-f3f8dd4d5791 (pull-images) was prepared for execution.
2026-04-17 05:15:47.751927 | orchestrator | 2026-04-17 05:15:47 | INFO  | Task 2503aa83-778f-4982-b172-f3f8dd4d5791 is running in background. No more output. Check ARA for logs.
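The `semver 9.5.0 7.0.0` / `[[ 1 -ge 0 ]]` pair above is a version gate: the pull step only runs when the manager version is at least 7.0.0. The testbed's own `semver` helper is not shown in this log; the following is a minimal sketch of such a gate, assuming the helper prints 1, 0, or -1 for greater, equal, or less:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the testbed's semver helper (not the real one):
# prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2, using GNU sort -V.
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        echo 1
    else
        echo -1
    fi
}

MANAGER_VERSION=9.5.0
# Gate as in the log: run the step only when MANAGER_VERSION >= 7.0.0.
if [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]]; then
    echo "manager >= 7.0.0, pulling images"
fi
```

With `MANAGER_VERSION=9.5.0` the helper prints `1`, so the `-ge 0` test passes, matching the `[[ 1 -ge 0 ]]` trace line above.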
2026-04-17 05:15:48.007183 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-04-17 05:15:48.021647 | orchestrator | + set -e
2026-04-17 05:15:48.021700 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-17 05:15:48.021715 | orchestrator | ++ export INTERACTIVE=false
2026-04-17 05:15:48.021728 | orchestrator | ++ INTERACTIVE=false
2026-04-17 05:15:48.021739 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-17 05:15:48.021750 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-17 05:15:48.021876 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-17 05:15:48.025013 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-17 05:15:48.040403 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-17 05:15:48.040496 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-17 05:15:48.041156 | orchestrator | ++ semver 10.0.0 8.0.3
2026-04-17 05:15:48.102006 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-17 05:15:48.102166 | orchestrator | + osism apply frr
2026-04-17 05:15:59.597861 | orchestrator | 2026-04-17 05:15:59 | INFO  | Prepare task for execution of frr.
2026-04-17 05:15:59.689070 | orchestrator | 2026-04-17 05:15:59 | INFO  | Task 02032f4d-8547-47ab-98e8-2b659add3927 (frr) was prepared for execution.
2026-04-17 05:15:59.689162 | orchestrator | 2026-04-17 05:15:59 | INFO  | It takes a moment until task 02032f4d-8547-47ab-98e8-2b659add3927 (frr) has been started and output is visible here.
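The `manager-version.sh` step above derives `MANAGER_VERSION` by grepping the configuration repository with awk. The same extraction can be reproduced standalone; the file written below is a stand-in for `/opt/configuration/environments/manager/configuration.yml`:

```shell
#!/usr/bin/env bash
# Write a minimal stand-in for the manager configuration file.
cat > /tmp/configuration.yml <<'EOF'
---
manager_version: 10.0.0
EOF

# Same awk invocation as in the log: split on ": " and print the value
# of the top-level manager_version key.
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' /tmp/configuration.yml)
export MANAGER_VERSION
echo "$MANAGER_VERSION"
```

Note this is a plain text match, not a YAML parse: it only works because `manager_version` is a top-level, unquoted scalar in the file.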
2026-04-17 05:16:35.752428 | orchestrator |
2026-04-17 05:16:35.752578 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-17 05:16:35.752596 | orchestrator |
2026-04-17 05:16:35.752609 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-17 05:16:35.752620 | orchestrator | Friday 17 April 2026 05:16:07 +0000 (0:00:03.775) 0:00:03.775 **********
2026-04-17 05:16:35.752631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-17 05:16:35.752644 | orchestrator |
2026-04-17 05:16:35.752655 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-17 05:16:35.752695 | orchestrator | Friday 17 April 2026 05:16:09 +0000 (0:00:02.152) 0:00:05.928 **********
2026-04-17 05:16:35.752708 | orchestrator | ok: [testbed-manager]
2026-04-17 05:16:35.752720 | orchestrator |
2026-04-17 05:16:35.752732 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-17 05:16:35.752754 | orchestrator | Friday 17 April 2026 05:16:11 +0000 (0:00:02.583) 0:00:08.512 **********
2026-04-17 05:16:35.752766 | orchestrator | ok: [testbed-manager]
2026-04-17 05:16:35.752777 | orchestrator |
2026-04-17 05:16:35.752788 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-17 05:16:35.752799 | orchestrator | Friday 17 April 2026 05:16:14 +0000 (0:00:02.902) 0:00:11.414 **********
2026-04-17 05:16:35.752810 | orchestrator | ok: [testbed-manager]
2026-04-17 05:16:35.752821 | orchestrator |
2026-04-17 05:16:35.752831 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-17 05:16:35.752865 | orchestrator | Friday 17 April 2026 05:16:16 +0000 (0:00:01.941) 0:00:13.356 **********
2026-04-17 05:16:35.752876 | orchestrator | ok: [testbed-manager]
2026-04-17 05:16:35.752887 | orchestrator |
2026-04-17 05:16:35.752900 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-17 05:16:35.752912 | orchestrator | Friday 17 April 2026 05:16:18 +0000 (0:00:02.014) 0:00:15.370 **********
2026-04-17 05:16:35.752925 | orchestrator | ok: [testbed-manager]
2026-04-17 05:16:35.752937 | orchestrator |
2026-04-17 05:16:35.752955 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-04-17 05:16:35.752967 | orchestrator | Friday 17 April 2026 05:16:21 +0000 (0:00:02.507) 0:00:17.878 **********
2026-04-17 05:16:35.752980 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:16:35.752993 | orchestrator |
2026-04-17 05:16:35.753006 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-04-17 05:16:35.753019 | orchestrator | Friday 17 April 2026 05:16:22 +0000 (0:00:01.250) 0:00:19.129 **********
2026-04-17 05:16:35.753031 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:16:35.753044 | orchestrator |
2026-04-17 05:16:35.753058 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-04-17 05:16:35.753076 | orchestrator | Friday 17 April 2026 05:16:23 +0000 (0:00:01.238) 0:00:20.368 **********
2026-04-17 05:16:35.753103 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:16:35.753123 | orchestrator |
2026-04-17 05:16:35.753142 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-17 05:16:35.753160 | orchestrator | Friday 17 April 2026 05:16:25 +0000 (0:00:01.233) 0:00:21.602 **********
2026-04-17 05:16:35.753178 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:16:35.753196 | orchestrator |
2026-04-17 05:16:35.753214 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-17 05:16:35.753233 | orchestrator | Friday 17 April 2026 05:16:26 +0000 (0:00:01.198) 0:00:22.800 **********
2026-04-17 05:16:35.753319 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:16:35.753335 | orchestrator |
2026-04-17 05:16:35.753346 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-17 05:16:35.753356 | orchestrator | Friday 17 April 2026 05:16:27 +0000 (0:00:01.184) 0:00:23.984 **********
2026-04-17 05:16:35.753367 | orchestrator | ok: [testbed-manager]
2026-04-17 05:16:35.753377 | orchestrator |
2026-04-17 05:16:35.753388 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-17 05:16:35.753398 | orchestrator | Friday 17 April 2026 05:16:29 +0000 (0:00:02.064) 0:00:26.048 **********
2026-04-17 05:16:35.753409 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-17 05:16:35.753419 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-17 05:16:35.753431 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-17 05:16:35.753442 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-17 05:16:35.753452 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-17 05:16:35.753463 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-17 05:16:35.753474 | orchestrator |
2026-04-17 05:16:35.753484 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-17 05:16:35.753495 | orchestrator | Friday 17 April 2026 05:16:32 +0000 (0:00:03.432) 0:00:29.480 **********
2026-04-17 05:16:35.753505 | orchestrator | ok: [testbed-manager]
2026-04-17 05:16:35.753516 | orchestrator |
2026-04-17 05:16:35.753527 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 05:16:35.753538 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 05:16:35.753561 | orchestrator |
2026-04-17 05:16:35.753572 | orchestrator |
2026-04-17 05:16:35.753582 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 05:16:35.753593 | orchestrator | Friday 17 April 2026 05:16:35 +0000 (0:00:02.492) 0:00:31.973 **********
2026-04-17 05:16:35.753604 | orchestrator | ===============================================================================
2026-04-17 05:16:35.753634 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.43s
2026-04-17 05:16:35.753645 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.90s
2026-04-17 05:16:35.753656 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.58s
2026-04-17 05:16:35.753695 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.51s
2026-04-17 05:16:35.753706 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.49s
2026-04-17 05:16:35.753716 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 2.15s
2026-04-17 05:16:35.753727 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 2.06s
2026-04-17 05:16:35.753737 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 2.01s
2026-04-17 05:16:35.753747 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.94s
2026-04-17 05:16:35.753758 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 1.25s
2026-04-17 05:16:35.753768 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 1.24s
2026-04-17 05:16:35.753779 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 1.23s
2026-04-17 05:16:35.753789 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.20s
2026-04-17 05:16:35.753800 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.18s
2026-04-17 05:16:35.975171 | orchestrator | + osism apply kubernetes
2026-04-17 05:16:37.366596 | orchestrator | 2026-04-17 05:16:37 | INFO  | Prepare task for execution of kubernetes.
2026-04-17 05:16:37.438099 | orchestrator | 2026-04-17 05:16:37 | INFO  | Task d89f48ff-37ff-4cbe-888e-d6df118c4a3d (kubernetes) was prepared for execution.
2026-04-17 05:16:37.438199 | orchestrator | 2026-04-17 05:16:37 | INFO  | It takes a moment until task d89f48ff-37ff-4cbe-888e-d6df118c4a3d (kubernetes) has been started and output is visible here.
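Collected in one place, the kernel parameters the frr role reported as `ok` in the "Set sysctl parameters" task correspond to this sysctl.d-style fragment (a sketch of the net effect only; the file path is hypothetical, not the role's own mechanism):

```
# /etc/sysctl.d/90-frr.conf (hypothetical path)
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
```

These enable routing on the manager, disable ICMP redirects, hash multipath routes per-flow, ignore routes whose link is down, and use loose reverse-path filtering, which is the usual baseline for a BGP-routed host.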
2026-04-17 05:17:22.618666 | orchestrator |
2026-04-17 05:17:22.618855 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-17 05:17:22.618876 | orchestrator |
2026-04-17 05:17:22.618888 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-17 05:17:22.618901 | orchestrator | Friday 17 April 2026 05:16:43 +0000 (0:00:02.225) 0:00:02.225 **********
2026-04-17 05:17:22.618913 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:17:22.618925 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:17:22.618948 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:17:22.618960 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:17:22.618970 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:17:22.618981 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:17:22.618992 | orchestrator |
2026-04-17 05:17:22.619003 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-17 05:17:22.619014 | orchestrator | Friday 17 April 2026 05:16:47 +0000 (0:00:04.540) 0:00:06.765 **********
2026-04-17 05:17:22.619024 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:17:22.619036 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:17:22.619047 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:17:22.619058 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:17:22.619069 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:17:22.619079 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:17:22.619090 | orchestrator |
2026-04-17 05:17:22.619101 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-17 05:17:22.619133 | orchestrator | Friday 17 April 2026 05:16:50 +0000 (0:00:02.087) 0:00:08.853 **********
2026-04-17 05:17:22.619146 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:17:22.619158 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:17:22.619170 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:17:22.619182 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:17:22.619194 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:17:22.619206 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:17:22.619266 | orchestrator |
2026-04-17 05:17:22.619280 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-17 05:17:22.619293 | orchestrator | Friday 17 April 2026 05:16:52 +0000 (0:00:02.014) 0:00:10.868 **********
2026-04-17 05:17:22.619304 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:17:22.619317 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:17:22.619329 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:17:22.619341 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:17:22.619354 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:17:22.619367 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:17:22.619379 | orchestrator |
2026-04-17 05:17:22.619391 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-17 05:17:22.619404 | orchestrator | Friday 17 April 2026 05:16:54 +0000 (0:00:02.711) 0:00:13.579 **********
2026-04-17 05:17:22.619416 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:17:22.619428 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:17:22.619440 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:17:22.619452 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:17:22.619464 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:17:22.619476 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:17:22.619489 | orchestrator |
2026-04-17 05:17:22.619501 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-17 05:17:22.619513 | orchestrator | Friday 17 April 2026 05:16:56 +0000 (0:00:02.152) 0:00:15.732 **********
2026-04-17 05:17:22.619525 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:17:22.619536 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:17:22.619546 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:17:22.619557 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:17:22.619567 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:17:22.619578 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:17:22.619588 | orchestrator |
2026-04-17 05:17:22.619599 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-17 05:17:22.619610 | orchestrator | Friday 17 April 2026 05:17:00 +0000 (0:00:03.073) 0:00:18.805 **********
2026-04-17 05:17:22.619620 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:17:22.619631 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:17:22.619642 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:17:22.619652 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:17:22.619663 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:17:22.619674 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:17:22.619684 | orchestrator |
2026-04-17 05:17:22.619696 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-17 05:17:22.619706 | orchestrator | Friday 17 April 2026 05:17:03 +0000 (0:00:03.206) 0:00:22.012 **********
2026-04-17 05:17:22.619717 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:17:22.619728 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:17:22.619759 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:17:22.619770 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:17:22.619781 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:17:22.619792 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:17:22.619802 | orchestrator |
2026-04-17 05:17:22.619813 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-17 05:17:22.619824 | orchestrator | Friday 17 April 2026 05:17:05 +0000 (0:00:02.087) 0:00:24.100 **********
2026-04-17 05:17:22.619834 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 05:17:22.619845 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 05:17:22.619883 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:17:22.619895 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 05:17:22.619906 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 05:17:22.619916 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:17:22.619927 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 05:17:22.619938 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 05:17:22.619948 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:17:22.619959 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 05:17:22.619969 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 05:17:22.619980 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:17:22.620011 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 05:17:22.620022 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 05:17:22.620033 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:17:22.620044 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-17 05:17:22.620055 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-17 05:17:22.620065 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:17:22.620076 | orchestrator |
2026-04-17 05:17:22.620086 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-17 05:17:22.620097 | orchestrator | Friday 17 April 2026 05:17:07 +0000 (0:00:01.836) 0:00:25.936 **********
2026-04-17 05:17:22.620107 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:17:22.620118 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:17:22.620128 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:17:22.620139 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:17:22.620150 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:17:22.620160 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:17:22.620171 | orchestrator |
2026-04-17 05:17:22.620181 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-17 05:17:22.620193 | orchestrator | Friday 17 April 2026 05:17:09 +0000 (0:00:02.127) 0:00:28.064 **********
2026-04-17 05:17:22.620204 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:17:22.620214 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:17:22.620225 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:17:22.620235 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:17:22.620246 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:17:22.620256 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:17:22.620267 | orchestrator |
2026-04-17 05:17:22.620277 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-17 05:17:22.620288 | orchestrator | Friday 17 April 2026 05:17:11 +0000 (0:00:01.946) 0:00:30.011 **********
2026-04-17 05:17:22.620298 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:17:22.620309 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:17:22.620319 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:17:22.620329 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:17:22.620340 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:17:22.620350 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:17:22.620361 | orchestrator |
2026-04-17 05:17:22.620371 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-17 05:17:22.620382 | orchestrator | Friday 17 April 2026 05:17:14 +0000 (0:00:02.802) 0:00:32.813 **********
2026-04-17 05:17:22.620393 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:17:22.620408 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:17:22.620419 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:17:22.620430 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:17:22.620440 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:17:22.620457 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:17:22.620468 | orchestrator |
2026-04-17 05:17:22.620479 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-17 05:17:22.620489 | orchestrator | Friday 17 April 2026 05:17:15 +0000 (0:00:01.936) 0:00:34.750 **********
2026-04-17 05:17:22.620506 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:17:22.620517 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:17:22.620528 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:17:22.620538 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:17:22.620549 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:17:22.620559 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:17:22.620570 | orchestrator |
2026-04-17 05:17:22.620581 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-17 05:17:22.620593 | orchestrator | Friday 17 April 2026 05:17:18 +0000 (0:00:02.196) 0:00:36.947 **********
2026-04-17 05:17:22.620604 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:17:22.620614 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:17:22.620624 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:17:22.620635 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:17:22.620645 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:17:22.620656 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:17:22.620666 | orchestrator |
2026-04-17 05:17:22.620677 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-17 05:17:22.620687 | orchestrator | Friday 17 April 2026 05:17:20 +0000 (0:00:02.071) 0:00:39.018 **********
2026-04-17 05:17:22.620698 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-17 05:17:22.620709 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-17 05:17:22.620720 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:17:22.620730 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-17 05:17:22.620761 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-17 05:17:22.620772 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:17:22.620783 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-17 05:17:22.620793 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-17 05:17:22.620804 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:17:22.620815 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-17 05:17:22.620825 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-17 05:17:22.620836 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:17:22.620846 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-17 05:17:22.620857 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-17 05:17:22.620867 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:17:22.620878 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-17 05:17:22.620888 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-17 05:17:22.620899 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:17:22.620909 | orchestrator |
2026-04-17 05:17:22.620920 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-17 05:17:22.620936 | orchestrator | Friday 17 April 2026 05:17:22 +0000 (0:00:02.171) 0:00:40.854 **********
2026-04-17 05:17:22.620947 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:17:22.620958 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:17:22.620975 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:19:08.970769 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:19:08.970896 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:19:08.970920 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:19:08.971012 | orchestrator |
2026-04-17 05:19:08.971037 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-17 05:19:08.971058 | orchestrator | Friday 17 April 2026 05:17:24 +0000 (0:00:02.171) 0:00:43.025 **********
2026-04-17 05:19:08.971077 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:19:08.971121 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:19:08.971133 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:19:08.971144 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:19:08.971154 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:19:08.971165 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:19:08.971175 | orchestrator |
2026-04-17 05:19:08.971186 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-17 05:19:08.971197 | orchestrator |
2026-04-17 05:19:08.971208 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-17 05:19:08.971221 | orchestrator | Friday 17 April 2026 05:17:27 +0000 (0:00:03.586) 0:00:46.612 **********
2026-04-17 05:19:08.971234 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:19:08.971247 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:19:08.971259 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:19:08.971273 | orchestrator |
2026-04-17 05:19:08.971285 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-17 05:19:08.971298 | orchestrator | Friday 17 April 2026 05:17:31 +0000 (0:00:03.190) 0:00:49.802 **********
2026-04-17 05:19:08.971311 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:19:08.971323 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:19:08.971333 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:19:08.971344 | orchestrator |
2026-04-17 05:19:08.971354 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-17 05:19:08.971365 | orchestrator | Friday 17 April 2026 05:17:33 +0000 (0:00:02.117) 0:00:51.920 **********
2026-04-17 05:19:08.971376 | orchestrator | changed: [testbed-node-0]
2026-04-17 05:19:08.971387 | orchestrator | changed: [testbed-node-1]
2026-04-17 05:19:08.971398 | orchestrator | changed: [testbed-node-2]
2026-04-17 05:19:08.971409 | orchestrator |
2026-04-17 05:19:08.971419 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-17 05:19:08.971430 | orchestrator | Friday 17 April 2026 05:17:35 +0000 (0:00:01.969) 0:00:53.890 **********
2026-04-17 05:19:08.971440 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:19:08.971451 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:19:08.971461 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:19:08.971472 | orchestrator |
2026-04-17 05:19:08.971482 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-17 05:19:08.971493 | orchestrator | Friday 17 April 2026 05:17:36 +0000 (0:00:01.690) 0:00:55.581 **********
2026-04-17 05:19:08.971503 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:19:08.971514 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:19:08.971524 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:19:08.971535 | orchestrator |
2026-04-17 05:19:08.971545 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-17 05:19:08.971556 | orchestrator | Friday 17 April 2026 05:17:38 +0000 (0:00:01.383) 0:00:56.965 **********
2026-04-17 05:19:08.971566 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:19:08.971577 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:19:08.971587 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:19:08.971597 | orchestrator |
2026-04-17 05:19:08.971608 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-17 05:19:08.971618 | orchestrator | Friday 17 April 2026 05:17:40 +0000 (0:00:01.988) 0:00:58.953 **********
2026-04-17 05:19:08.971629 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:19:08.971639 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:19:08.971650 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:19:08.971660 | orchestrator |
2026-04-17 05:19:08.971671 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-17 05:19:08.971681 | orchestrator | Friday 17 April 2026 05:17:42 +0000 (0:00:02.169) 0:01:01.123 **********
2026-04-17 05:19:08.971692 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 05:19:08.971702 | orchestrator |
2026-04-17 05:19:08.971713 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-17 05:19:08.971731 | orchestrator | Friday 17 April 2026 05:17:44 +0000 (0:00:01.842) 0:01:02.966 **********
2026-04-17 05:19:08.971742 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:19:08.971752 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:19:08.971763 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:19:08.971773 | orchestrator |
2026-04-17 05:19:08.971784 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-17 05:19:08.971794 | orchestrator | Friday 17 April 2026 05:17:46 +0000 (0:00:02.751) 0:01:05.718 **********
2026-04-17 05:19:08.971805 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:19:08.971815 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:19:08.971826 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:19:08.971836 | orchestrator |
2026-04-17 05:19:08.971847 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-17 05:19:08.971857 | orchestrator | Friday 17 April 2026 05:17:48 +0000 (0:00:01.575) 0:01:07.293 **********
2026-04-17 05:19:08.971868 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:19:08.971878 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:19:08.971889 | orchestrator | changed: [testbed-node-0]
2026-04-17 05:19:08.971899 | orchestrator |
2026-04-17 05:19:08.971910 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-17 05:19:08.971920 | orchestrator | Friday 17 April 2026 05:17:50 +0000 (0:00:01.808) 0:01:09.102 **********
2026-04-17 05:19:08.971931 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:19:08.971964 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:19:08.971976 | orchestrator | changed: [testbed-node-0]
2026-04-17 05:19:08.971986 | orchestrator |
2026-04-17 05:19:08.971997 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-17 05:19:08.972008 | orchestrator | Friday 17 April 2026 05:17:52 +0000 (0:00:02.363) 0:01:11.465 **********
2026-04-17 05:19:08.972018 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:19:08.972029 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:19:08.972060 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:19:08.972072 | orchestrator |
2026-04-17 05:19:08.972082 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-17 05:19:08.972094 | orchestrator | Friday 17 April 2026 05:17:54 +0000 (0:00:01.540) 0:01:13.006 **********
2026-04-17 05:19:08.972104 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:19:08.972115 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:19:08.972126 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:19:08.972136 | orchestrator |
2026-04-17 05:19:08.972147 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-17 05:19:08.972157 | orchestrator | Friday 17 April 2026 05:17:55 +0000 (0:00:01.404) 0:01:14.410 **********
2026-04-17 05:19:08.972188 | orchestrator | changed: [testbed-node-0]
2026-04-17 05:19:08.972200 | orchestrator | changed: [testbed-node-1]
2026-04-17 05:19:08.972210 | orchestrator | changed: [testbed-node-2]
2026-04-17 05:19:08.972221 | orchestrator |
2026-04-17 05:19:08.972233 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-17 05:19:08.972251 | orchestrator | Friday 17 April 2026 05:17:57 +0000 (0:00:02.248) 0:01:16.659 **********
2026-04-17 05:19:08.972270 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:19:08.972288 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:19:08.972305 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:19:08.972316 | orchestrator |
2026-04-17 05:19:08.972327 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-17 05:19:08.972337 | orchestrator | Friday 17 April 2026 05:18:00 +0000 (0:00:02.238) 0:01:18.898 **********
2026-04-17 05:19:08.972348 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:19:08.972358 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:19:08.972369 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:19:08.972379 | orchestrator |
2026-04-17 05:19:08.972390
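The k3s_custom_registries tasks were all skipped because no custom registries are configured in this run. Had they run, they would render `/etc/rancher/k3s/registries.yaml`; for reference, a minimal example in the format documented by k3s (the mirror endpoint below is hypothetical):

```yaml
# /etc/rancher/k3s/registries.yaml -- example only, not written in this run
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
```

With no such file present, k3s pulls images directly from their default upstream registries, which is why the "Remove ... when no registries configured" cleanup task exists alongside the insert task.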
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-17 05:19:08.972401 | orchestrator | Friday 17 April 2026 05:18:01 +0000 (0:00:01.470) 0:01:20.369 ********** 2026-04-17 05:19:08.972420 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-17 05:19:08.972433 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-17 05:19:08.972444 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-17 05:19:08.972455 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-17 05:19:08.972465 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-17 05:19:08.972476 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-17 05:19:08.972487 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-17 05:19:08.972497 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-17 05:19:08.972508 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-04-17 05:19:08.972518 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:19:08.972529 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:19:08.972539 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:19:08.972550 | orchestrator | 2026-04-17 05:19:08.972561 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-17 05:19:08.972571 | orchestrator | Friday 17 April 2026 05:18:35 +0000 (0:00:33.671) 0:01:54.040 ********** 2026-04-17 05:19:08.972582 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:19:08.972593 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:19:08.972603 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:19:08.972614 | orchestrator | 2026-04-17 05:19:08.972624 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-17 05:19:08.972635 | orchestrator | Friday 17 April 2026 05:18:36 +0000 (0:00:01.424) 0:01:55.465 ********** 2026-04-17 05:19:08.972646 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:19:08.972656 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:19:08.972667 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:19:08.972678 | orchestrator | 2026-04-17 05:19:08.972689 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-17 05:19:08.972699 | orchestrator | Friday 17 April 2026 05:18:39 +0000 (0:00:02.432) 0:01:57.898 ********** 2026-04-17 05:19:08.972710 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:19:08.972720 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:19:08.972731 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:19:08.972741 | orchestrator | 2026-04-17 05:19:08.972752 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-17 05:19:08.972762 | orchestrator | Friday 17 April 2026 05:18:41 +0000 (0:00:02.361) 0:02:00.259 ********** 2026-04-17 05:19:08.972773 | orchestrator 
| changed: [testbed-node-0] 2026-04-17 05:19:08.972784 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:19:08.972794 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:19:08.972805 | orchestrator | 2026-04-17 05:19:08.972815 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-17 05:19:08.972826 | orchestrator | Friday 17 April 2026 05:19:07 +0000 (0:00:25.782) 0:02:26.042 ********** 2026-04-17 05:19:08.972836 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:19:08.972847 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:19:08.972857 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:19:08.972868 | orchestrator | 2026-04-17 05:19:08.972879 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-17 05:19:08.972903 | orchestrator | Friday 17 April 2026 05:19:08 +0000 (0:00:01.709) 0:02:27.752 ********** 2026-04-17 05:19:57.804357 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:19:57.804504 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:19:57.804531 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:19:57.804551 | orchestrator | 2026-04-17 05:19:57.804571 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-17 05:19:57.804592 | orchestrator | Friday 17 April 2026 05:19:10 +0000 (0:00:01.774) 0:02:29.526 ********** 2026-04-17 05:19:57.804611 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:19:57.804629 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:19:57.804646 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:19:57.804664 | orchestrator | 2026-04-17 05:19:57.804682 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-17 05:19:57.804701 | orchestrator | Friday 17 April 2026 05:19:12 +0000 (0:00:01.749) 0:02:31.275 ********** 2026-04-17 05:19:57.804719 | orchestrator | ok: [testbed-node-0] 2026-04-17 
05:19:57.804737 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:19:57.804756 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:19:57.804774 | orchestrator | 2026-04-17 05:19:57.804792 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-17 05:19:57.804811 | orchestrator | Friday 17 April 2026 05:19:14 +0000 (0:00:01.672) 0:02:32.948 ********** 2026-04-17 05:19:57.804827 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:19:57.804838 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:19:57.804849 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:19:57.804862 | orchestrator | 2026-04-17 05:19:57.804874 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-17 05:19:57.804887 | orchestrator | Friday 17 April 2026 05:19:15 +0000 (0:00:01.640) 0:02:34.588 ********** 2026-04-17 05:19:57.804899 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:19:57.804911 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:19:57.804924 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:19:57.804936 | orchestrator | 2026-04-17 05:19:57.804949 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-17 05:19:57.804962 | orchestrator | Friday 17 April 2026 05:19:17 +0000 (0:00:01.740) 0:02:36.329 ********** 2026-04-17 05:19:57.804974 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:19:57.804987 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:19:57.804999 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:19:57.805011 | orchestrator | 2026-04-17 05:19:57.805076 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-17 05:19:57.805091 | orchestrator | Friday 17 April 2026 05:19:19 +0000 (0:00:01.780) 0:02:38.110 ********** 2026-04-17 05:19:57.805102 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:19:57.805113 | orchestrator | changed: 
[testbed-node-1] 2026-04-17 05:19:57.805124 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:19:57.805135 | orchestrator | 2026-04-17 05:19:57.805145 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-17 05:19:57.805156 | orchestrator | Friday 17 April 2026 05:19:21 +0000 (0:00:01.846) 0:02:39.956 ********** 2026-04-17 05:19:57.805167 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:19:57.805178 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:19:57.805189 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:19:57.805200 | orchestrator | 2026-04-17 05:19:57.805210 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-17 05:19:57.805221 | orchestrator | Friday 17 April 2026 05:19:23 +0000 (0:00:02.156) 0:02:42.112 ********** 2026-04-17 05:19:57.805235 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:19:57.805253 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:19:57.805264 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:19:57.805275 | orchestrator | 2026-04-17 05:19:57.805286 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-17 05:19:57.805297 | orchestrator | Friday 17 April 2026 05:19:24 +0000 (0:00:01.366) 0:02:43.479 ********** 2026-04-17 05:19:57.805336 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:19:57.805348 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:19:57.805358 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:19:57.805369 | orchestrator | 2026-04-17 05:19:57.805379 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-17 05:19:57.805390 | orchestrator | Friday 17 April 2026 05:19:26 +0000 (0:00:01.358) 0:02:44.837 ********** 2026-04-17 05:19:57.805401 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:19:57.805411 | orchestrator | ok: [testbed-node-2] 
2026-04-17 05:19:57.805422 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:19:57.805433 | orchestrator | 2026-04-17 05:19:57.805443 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-17 05:19:57.805454 | orchestrator | Friday 17 April 2026 05:19:27 +0000 (0:00:01.797) 0:02:46.635 ********** 2026-04-17 05:19:57.805465 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:19:57.805476 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:19:57.805486 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:19:57.805497 | orchestrator | 2026-04-17 05:19:57.805508 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-17 05:19:57.805521 | orchestrator | Friday 17 April 2026 05:19:29 +0000 (0:00:01.913) 0:02:48.548 ********** 2026-04-17 05:19:57.805532 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-17 05:19:57.805543 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-17 05:19:57.805553 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-17 05:19:57.805564 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-17 05:19:57.805591 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-17 05:19:57.805603 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-17 05:19:57.805618 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-17 05:19:57.805630 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-17 05:19:57.805664 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-17 05:19:57.805675 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-17 05:19:57.805686 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-17 05:19:57.805696 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-17 05:19:57.805707 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-17 05:19:57.805718 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-17 05:19:57.805728 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-17 05:19:57.805739 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-17 05:19:57.805749 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-17 05:19:57.805760 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-17 05:19:57.805771 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-17 05:19:57.805781 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-17 05:19:57.805792 | orchestrator | 2026-04-17 05:19:57.805803 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-17 05:19:57.805821 | orchestrator | 2026-04-17 05:19:57.805832 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-17 05:19:57.805843 | orchestrator | Friday 17 April 2026 05:19:34 +0000 (0:00:04.347) 0:02:52.896 ********** 
2026-04-17 05:19:57.805854 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:19:57.805864 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:19:57.805875 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:19:57.805886 | orchestrator | 2026-04-17 05:19:57.805897 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-17 05:19:57.805907 | orchestrator | Friday 17 April 2026 05:19:35 +0000 (0:00:01.663) 0:02:54.560 ********** 2026-04-17 05:19:57.805918 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:19:57.805929 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:19:57.805939 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:19:57.805950 | orchestrator | 2026-04-17 05:19:57.805961 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-17 05:19:57.805972 | orchestrator | Friday 17 April 2026 05:19:37 +0000 (0:00:01.687) 0:02:56.247 ********** 2026-04-17 05:19:57.805982 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:19:57.805993 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:19:57.806003 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:19:57.806014 | orchestrator | 2026-04-17 05:19:57.806106 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-17 05:19:57.806118 | orchestrator | Friday 17 April 2026 05:19:38 +0000 (0:00:01.466) 0:02:57.714 ********** 2026-04-17 05:19:57.806128 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 05:19:57.806140 | orchestrator | 2026-04-17 05:19:57.806151 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-17 05:19:57.806161 | orchestrator | Friday 17 April 2026 05:19:40 +0000 (0:00:02.047) 0:02:59.762 ********** 2026-04-17 05:19:57.806172 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:19:57.806183 | orchestrator | 
skipping: [testbed-node-4] 2026-04-17 05:19:57.806194 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:19:57.806205 | orchestrator | 2026-04-17 05:19:57.806216 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-17 05:19:57.806227 | orchestrator | Friday 17 April 2026 05:19:42 +0000 (0:00:01.457) 0:03:01.220 ********** 2026-04-17 05:19:57.806238 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:19:57.806248 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:19:57.806259 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:19:57.806270 | orchestrator | 2026-04-17 05:19:57.806281 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-17 05:19:57.806291 | orchestrator | Friday 17 April 2026 05:19:43 +0000 (0:00:01.448) 0:03:02.669 ********** 2026-04-17 05:19:57.806302 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:19:57.806313 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:19:57.806324 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:19:57.806334 | orchestrator | 2026-04-17 05:19:57.806345 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-17 05:19:57.806355 | orchestrator | Friday 17 April 2026 05:19:45 +0000 (0:00:01.511) 0:03:04.181 ********** 2026-04-17 05:19:57.806366 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:19:57.806377 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:19:57.806388 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:19:57.806398 | orchestrator | 2026-04-17 05:19:57.806409 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-17 05:19:57.806420 | orchestrator | Friday 17 April 2026 05:19:47 +0000 (0:00:01.711) 0:03:05.892 ********** 2026-04-17 05:19:57.806431 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:19:57.806442 | orchestrator | ok: [testbed-node-4] 
2026-04-17 05:19:57.806452 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:19:57.806463 | orchestrator | 2026-04-17 05:19:57.806474 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-17 05:19:57.806485 | orchestrator | Friday 17 April 2026 05:19:49 +0000 (0:00:02.213) 0:03:08.105 ********** 2026-04-17 05:19:57.806503 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:19:57.806514 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:19:57.806530 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:19:57.806541 | orchestrator | 2026-04-17 05:19:57.806552 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-17 05:19:57.806563 | orchestrator | Friday 17 April 2026 05:19:51 +0000 (0:00:02.287) 0:03:10.393 ********** 2026-04-17 05:19:57.806580 | orchestrator | changed: [testbed-node-3] 2026-04-17 05:21:09.954766 | orchestrator | changed: [testbed-node-4] 2026-04-17 05:21:09.954882 | orchestrator | changed: [testbed-node-5] 2026-04-17 05:21:09.954899 | orchestrator | 2026-04-17 05:21:09.954911 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-17 05:21:09.954924 | orchestrator | 2026-04-17 05:21:09.954936 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-17 05:21:09.954947 | orchestrator | Friday 17 April 2026 05:19:59 +0000 (0:00:08.295) 0:03:18.688 ********** 2026-04-17 05:21:09.954959 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.954971 | orchestrator | 2026-04-17 05:21:09.954982 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-17 05:21:09.954993 | orchestrator | Friday 17 April 2026 05:20:02 +0000 (0:00:02.257) 0:03:20.946 ********** 2026-04-17 05:21:09.955003 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.955014 | orchestrator | 2026-04-17 05:21:09.955025 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-17 05:21:09.955036 | orchestrator | Friday 17 April 2026 05:20:03 +0000 (0:00:01.462) 0:03:22.408 ********** 2026-04-17 05:21:09.955047 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-17 05:21:09.955058 | orchestrator | 2026-04-17 05:21:09.955069 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-17 05:21:09.955080 | orchestrator | Friday 17 April 2026 05:20:05 +0000 (0:00:01.576) 0:03:23.985 ********** 2026-04-17 05:21:09.955091 | orchestrator | changed: [testbed-manager] 2026-04-17 05:21:09.955102 | orchestrator | 2026-04-17 05:21:09.955112 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-17 05:21:09.955123 | orchestrator | Friday 17 April 2026 05:20:07 +0000 (0:00:02.032) 0:03:26.018 ********** 2026-04-17 05:21:09.955134 | orchestrator | changed: [testbed-manager] 2026-04-17 05:21:09.955144 | orchestrator | 2026-04-17 05:21:09.955180 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-17 05:21:09.955192 | orchestrator | Friday 17 April 2026 05:20:09 +0000 (0:00:01.975) 0:03:27.994 ********** 2026-04-17 05:21:09.955202 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-17 05:21:09.955213 | orchestrator | 2026-04-17 05:21:09.955224 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-17 05:21:09.955234 | orchestrator | Friday 17 April 2026 05:20:12 +0000 (0:00:03.277) 0:03:31.272 ********** 2026-04-17 05:21:09.955245 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-17 05:21:09.955256 | orchestrator | 2026-04-17 05:21:09.955266 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-17 05:21:09.955277 | orchestrator | Friday 17 April 2026 
05:20:14 +0000 (0:00:02.008) 0:03:33.280 ********** 2026-04-17 05:21:09.955290 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.955303 | orchestrator | 2026-04-17 05:21:09.955315 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-17 05:21:09.955327 | orchestrator | Friday 17 April 2026 05:20:16 +0000 (0:00:01.576) 0:03:34.857 ********** 2026-04-17 05:21:09.955339 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.955352 | orchestrator | 2026-04-17 05:21:09.955364 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-04-17 05:21:09.955376 | orchestrator | 2026-04-17 05:21:09.955389 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-04-17 05:21:09.955401 | orchestrator | Friday 17 April 2026 05:20:17 +0000 (0:00:01.653) 0:03:36.510 ********** 2026-04-17 05:21:09.955439 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.955452 | orchestrator | 2026-04-17 05:21:09.955465 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-04-17 05:21:09.955478 | orchestrator | Friday 17 April 2026 05:20:18 +0000 (0:00:01.238) 0:03:37.749 ********** 2026-04-17 05:21:09.955491 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-04-17 05:21:09.955504 | orchestrator | 2026-04-17 05:21:09.955516 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-04-17 05:21:09.955528 | orchestrator | Friday 17 April 2026 05:20:20 +0000 (0:00:01.641) 0:03:39.390 ********** 2026-04-17 05:21:09.955540 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.955553 | orchestrator | 2026-04-17 05:21:09.955565 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-04-17 05:21:09.955577 | orchestrator | Friday 17 April 2026 
05:20:22 +0000 (0:00:01.909) 0:03:41.300 ********** 2026-04-17 05:21:09.955589 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.955601 | orchestrator | 2026-04-17 05:21:09.955614 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-04-17 05:21:09.955626 | orchestrator | Friday 17 April 2026 05:20:25 +0000 (0:00:02.977) 0:03:44.277 ********** 2026-04-17 05:21:09.955636 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.955646 | orchestrator | 2026-04-17 05:21:09.955657 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-04-17 05:21:09.955667 | orchestrator | Friday 17 April 2026 05:20:26 +0000 (0:00:01.477) 0:03:45.755 ********** 2026-04-17 05:21:09.955678 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.955688 | orchestrator | 2026-04-17 05:21:09.955699 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-04-17 05:21:09.955709 | orchestrator | Friday 17 April 2026 05:20:28 +0000 (0:00:01.592) 0:03:47.347 ********** 2026-04-17 05:21:09.955719 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.955730 | orchestrator | 2026-04-17 05:21:09.955740 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-04-17 05:21:09.955751 | orchestrator | Friday 17 April 2026 05:20:30 +0000 (0:00:01.710) 0:03:49.058 ********** 2026-04-17 05:21:09.955761 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.955771 | orchestrator | 2026-04-17 05:21:09.955782 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-04-17 05:21:09.955793 | orchestrator | Friday 17 April 2026 05:20:33 +0000 (0:00:02.743) 0:03:51.802 ********** 2026-04-17 05:21:09.955820 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:09.955831 | orchestrator | 2026-04-17 05:21:09.955842 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-04-17 05:21:09.955853 | orchestrator | 2026-04-17 05:21:09.955864 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-04-17 05:21:09.955891 | orchestrator | Friday 17 April 2026 05:20:34 +0000 (0:00:01.977) 0:03:53.779 ********** 2026-04-17 05:21:09.955903 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:21:09.955914 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:21:09.955924 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:21:09.955935 | orchestrator | 2026-04-17 05:21:09.955946 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-04-17 05:21:09.955957 | orchestrator | Friday 17 April 2026 05:20:36 +0000 (0:00:01.395) 0:03:55.174 ********** 2026-04-17 05:21:09.955967 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:21:09.955978 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:21:09.955989 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:21:09.956000 | orchestrator | 2026-04-17 05:21:09.956010 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-04-17 05:21:09.956021 | orchestrator | Friday 17 April 2026 05:20:37 +0000 (0:00:01.448) 0:03:56.624 ********** 2026-04-17 05:21:09.956032 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:21:09.956051 | orchestrator | 2026-04-17 05:21:09.956061 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-04-17 05:21:09.956072 | orchestrator | Friday 17 April 2026 05:20:39 +0000 (0:00:01.965) 0:03:58.589 ********** 2026-04-17 05:21:09.956083 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-17 05:21:09.956094 | orchestrator | 2026-04-17 05:21:09.956104 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-04-17 05:21:09.956115 | orchestrator | Friday 17 April 2026 05:20:41 +0000 (0:00:01.952) 0:04:00.542 ********** 2026-04-17 05:21:09.956126 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 05:21:09.956137 | orchestrator | 2026-04-17 05:21:09.956148 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-17 05:21:09.956175 | orchestrator | Friday 17 April 2026 05:20:43 +0000 (0:00:02.018) 0:04:02.560 ********** 2026-04-17 05:21:09.956186 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:21:09.956197 | orchestrator | 2026-04-17 05:21:09.956208 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-17 05:21:09.956219 | orchestrator | Friday 17 April 2026 05:20:44 +0000 (0:00:01.132) 0:04:03.693 ********** 2026-04-17 05:21:09.956230 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 05:21:09.956240 | orchestrator | 2026-04-17 05:21:09.956251 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-17 05:21:09.956262 | orchestrator | Friday 17 April 2026 05:20:47 +0000 (0:00:02.133) 0:04:05.826 ********** 2026-04-17 05:21:09.956273 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 05:21:09.956283 | orchestrator | 2026-04-17 05:21:09.956294 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-17 05:21:09.956305 | orchestrator | Friday 17 April 2026 05:20:49 +0000 (0:00:02.455) 0:04:08.282 ********** 2026-04-17 05:21:09.956315 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 05:21:09.956326 | orchestrator | 2026-04-17 05:21:09.956337 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-17 05:21:09.956347 | orchestrator | Friday 17 April 2026 05:20:50 +0000 (0:00:01.189) 0:04:09.471 ********** 2026-04-17 05:21:09.956358 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-04-17 05:21:09.956369 | orchestrator | 2026-04-17 05:21:09.956380 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-17 05:21:09.956390 | orchestrator | Friday 17 April 2026 05:20:51 +0000 (0:00:01.236) 0:04:10.707 ********** 2026-04-17 05:21:09.956401 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-04-17 05:21:09.956412 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-04-17 05:21:09.956423 | orchestrator | } 2026-04-17 05:21:09.956434 | orchestrator | 2026-04-17 05:21:09.956445 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-17 05:21:09.956456 | orchestrator | Friday 17 April 2026 05:20:53 +0000 (0:00:01.237) 0:04:11.944 ********** 2026-04-17 05:21:09.956467 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:21:09.956477 | orchestrator | 2026-04-17 05:21:09.956488 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-17 05:21:09.956499 | orchestrator | Friday 17 April 2026 05:20:54 +0000 (0:00:01.173) 0:04:13.118 ********** 2026-04-17 05:21:09.956509 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-17 05:21:09.956520 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-17 05:21:09.956531 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-17 05:21:09.956541 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-17 05:21:09.956552 | orchestrator | 2026-04-17 05:21:09.956563 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-17 05:21:09.956573 | orchestrator | Friday 17 April 2026 05:21:00 +0000 (0:00:05.976) 0:04:19.094 ********** 2026-04-17 05:21:09.956584 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-04-17 05:21:09.956602 | orchestrator | 2026-04-17 05:21:09.956613 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-17 05:21:09.956623 | orchestrator | Friday 17 April 2026 05:21:02 +0000 (0:00:02.497) 0:04:21.591 ********** 2026-04-17 05:21:09.956634 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-17 05:21:09.956645 | orchestrator | 2026-04-17 05:21:09.956656 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-17 05:21:09.956666 | orchestrator | Friday 17 April 2026 05:21:05 +0000 (0:00:02.773) 0:04:24.365 ********** 2026-04-17 05:21:09.956677 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-17 05:21:09.956688 | orchestrator | 2026-04-17 05:21:09.956708 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-17 05:21:09.956726 | orchestrator | Friday 17 April 2026 05:21:09 +0000 (0:00:04.210) 0:04:28.576 ********** 2026-04-17 05:21:09.956745 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:21:09.956764 | orchestrator | 2026-04-17 05:21:09.956791 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-17 05:21:44.959564 | orchestrator | Friday 17 April 2026 05:21:10 +0000 (0:00:01.172) 0:04:29.748 ********** 2026-04-17 05:21:44.959737 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-17 05:21:44.959757 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-17 05:21:44.959769 | orchestrator | 2026-04-17 05:21:44.959781 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-17 05:21:44.959793 | orchestrator | Friday 17 April 2026 05:21:14 +0000 (0:00:03.097) 0:04:32.846 ********** 2026-04-17 
05:21:44.959804 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:21:44.959817 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:21:44.959828 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:21:44.959839 | orchestrator | 2026-04-17 05:21:44.959851 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-17 05:21:44.959865 | orchestrator | Friday 17 April 2026 05:21:15 +0000 (0:00:01.467) 0:04:34.314 ********** 2026-04-17 05:21:44.959877 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:21:44.959912 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:21:44.959924 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:21:44.959937 | orchestrator | 2026-04-17 05:21:44.959950 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-17 05:21:44.959962 | orchestrator | 2026-04-17 05:21:44.959975 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-17 05:21:44.959988 | orchestrator | Friday 17 April 2026 05:21:17 +0000 (0:00:02.453) 0:04:36.767 ********** 2026-04-17 05:21:44.960001 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:44.960013 | orchestrator | 2026-04-17 05:21:44.960025 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-17 05:21:44.960038 | orchestrator | Friday 17 April 2026 05:21:19 +0000 (0:00:01.163) 0:04:37.930 ********** 2026-04-17 05:21:44.960052 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-17 05:21:44.960077 | orchestrator | 2026-04-17 05:21:44.960090 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-17 05:21:44.960102 | orchestrator | Friday 17 April 2026 05:21:20 +0000 (0:00:01.479) 0:04:39.410 ********** 2026-04-17 05:21:44.960114 | orchestrator | ok: [testbed-manager] 2026-04-17 05:21:44.960127 | 
orchestrator | 2026-04-17 05:21:44.960139 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-17 05:21:44.960151 | orchestrator | 2026-04-17 05:21:44.960163 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-17 05:21:44.960177 | orchestrator | Friday 17 April 2026 05:21:26 +0000 (0:00:05.535) 0:04:44.945 ********** 2026-04-17 05:21:44.960189 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:21:44.960202 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:21:44.960214 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:21:44.960261 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:21:44.960320 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:21:44.960344 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:21:44.960361 | orchestrator | 2026-04-17 05:21:44.960378 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-17 05:21:44.960396 | orchestrator | Friday 17 April 2026 05:21:27 +0000 (0:00:01.804) 0:04:46.750 ********** 2026-04-17 05:21:44.960414 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-17 05:21:44.960432 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-17 05:21:44.960449 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-17 05:21:44.960466 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-17 05:21:44.960484 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-17 05:21:44.960501 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-17 05:21:44.960519 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 
2026-04-17 05:21:44.960535 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-17 05:21:44.960551 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-17 05:21:44.960568 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-17 05:21:44.960585 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-17 05:21:44.960602 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-17 05:21:44.960619 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-17 05:21:44.960639 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-17 05:21:44.960657 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-17 05:21:44.960676 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-17 05:21:44.960694 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-17 05:21:44.960712 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-17 05:21:44.960733 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-17 05:21:44.960745 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-17 05:21:44.960756 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-17 05:21:44.960790 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-17 05:21:44.960802 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-17 
05:21:44.960812 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-17 05:21:44.960823 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-17 05:21:44.960833 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-17 05:21:44.960844 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-17 05:21:44.960854 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-17 05:21:44.960865 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-17 05:21:44.960875 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-17 05:21:44.960886 | orchestrator | 2026-04-17 05:21:44.960897 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-17 05:21:44.960921 | orchestrator | Friday 17 April 2026 05:21:39 +0000 (0:00:11.518) 0:04:58.269 ********** 2026-04-17 05:21:44.960973 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:21:44.960993 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:21:44.961015 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:21:44.961042 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:21:44.961058 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:21:44.961075 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:21:44.961092 | orchestrator | 2026-04-17 05:21:44.961110 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-17 05:21:44.961127 | orchestrator | Friday 17 April 2026 05:21:41 +0000 (0:00:01.991) 0:05:00.260 ********** 2026-04-17 05:21:44.961145 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:21:44.961163 | orchestrator | skipping: [testbed-node-4] 
2026-04-17 05:21:44.961182 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:21:44.961200 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:21:44.961249 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:21:44.961270 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:21:44.961288 | orchestrator | 2026-04-17 05:21:44.961305 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:21:44.961322 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 05:21:44.961343 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-17 05:21:44.961362 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-17 05:21:44.961381 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-17 05:21:44.961400 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 05:21:44.961418 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 05:21:44.961438 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 05:21:44.961456 | orchestrator | 2026-04-17 05:21:44.961474 | orchestrator | 2026-04-17 05:21:44.961492 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:21:44.961512 | orchestrator | Friday 17 April 2026 05:21:44 +0000 (0:00:03.472) 0:05:03.733 ********** 2026-04-17 05:21:44.961529 | orchestrator | =============================================================================== 2026-04-17 05:21:44.961547 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 33.67s 2026-04-17 
05:21:44.961565 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.78s 2026-04-17 05:21:44.961584 | orchestrator | Manage labels ---------------------------------------------------------- 11.52s 2026-04-17 05:21:44.961603 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.30s 2026-04-17 05:21:44.961621 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.98s 2026-04-17 05:21:44.961640 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.54s 2026-04-17 05:21:44.961659 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.54s 2026-04-17 05:21:44.961678 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.35s 2026-04-17 05:21:44.961698 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.21s 2026-04-17 05:21:44.961730 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.59s 2026-04-17 05:21:44.961742 | orchestrator | Manage taints ----------------------------------------------------------- 3.47s 2026-04-17 05:21:44.961753 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.28s 2026-04-17 05:21:44.961775 | orchestrator | k3s_prereq : Add br_netfilter to /etc/modules-load.d/ ------------------- 3.21s 2026-04-17 05:21:45.342295 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 3.19s 2026-04-17 05:21:45.342380 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.10s 2026-04-17 05:21:45.342391 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 3.07s 2026-04-17 05:21:45.342399 | orchestrator | kubectl : 
Install apt-transport-https package --------------------------- 2.98s 2026-04-17 05:21:45.342407 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.80s 2026-04-17 05:21:45.342414 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.77s 2026-04-17 05:21:45.342421 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.75s 2026-04-17 05:21:45.600885 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-17 05:21:45.600984 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-04-17 05:21:45.606353 | orchestrator | + set -e 2026-04-17 05:21:45.606392 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 05:21:45.606404 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 05:21:45.606415 | orchestrator | ++ INTERACTIVE=false 2026-04-17 05:21:45.606427 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 05:21:45.606632 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 05:21:45.606653 | orchestrator | + osism apply openstackclient 2026-04-17 05:21:57.051799 | orchestrator | 2026-04-17 05:21:57 | INFO  | Prepare task for execution of openstackclient. 2026-04-17 05:21:57.132879 | orchestrator | 2026-04-17 05:21:57 | INFO  | Task 52d1acc0-c573-49b4-9971-c16974d5c433 (openstackclient) was prepared for execution. 2026-04-17 05:21:57.132969 | orchestrator | 2026-04-17 05:21:57 | INFO  | It takes a moment until task 52d1acc0-c573-49b4-9971-c16974d5c433 (openstackclient) has been started and output is visible here. 
2026-04-17 05:22:24.157510 | orchestrator | 2026-04-17 05:22:24.157628 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-17 05:22:24.157644 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-17 05:22:24.157659 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-17 05:22:24.157682 | orchestrator | 2026-04-17 05:22:24.157693 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-17 05:22:24.157704 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-17 05:22:24.157714 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-17 05:22:24.157736 | orchestrator | Friday 17 April 2026 05:22:02 +0000 (0:00:01.656) 0:00:01.656 ********** 2026-04-17 05:22:24.157748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-17 05:22:24.157760 | orchestrator | 2026-04-17 05:22:24.157771 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-17 05:22:24.157782 | orchestrator | Friday 17 April 2026 05:22:03 +0000 (0:00:00.791) 0:00:02.447 ********** 2026-04-17 05:22:24.157793 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-17 05:22:24.157804 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-17 05:22:24.157814 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-17 05:22:24.157850 | orchestrator | 2026-04-17 05:22:24.157862 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-17 05:22:24.157873 | orchestrator | Friday 17 April 2026 05:22:05 +0000 (0:00:01.751) 0:00:04.199 ********** 2026-04-17 05:22:24.157884 | 
orchestrator | changed: [testbed-manager] 2026-04-17 05:22:24.157895 | orchestrator | 2026-04-17 05:22:24.157906 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-17 05:22:24.157916 | orchestrator | Friday 17 April 2026 05:22:06 +0000 (0:00:01.386) 0:00:05.586 ********** 2026-04-17 05:22:24.157927 | orchestrator | ok: [testbed-manager] 2026-04-17 05:22:24.157939 | orchestrator | 2026-04-17 05:22:24.157949 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-17 05:22:24.157960 | orchestrator | Friday 17 April 2026 05:22:07 +0000 (0:00:01.093) 0:00:06.679 ********** 2026-04-17 05:22:24.157971 | orchestrator | ok: [testbed-manager] 2026-04-17 05:22:24.157981 | orchestrator | 2026-04-17 05:22:24.157992 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-17 05:22:24.158003 | orchestrator | Friday 17 April 2026 05:22:08 +0000 (0:00:01.095) 0:00:07.775 ********** 2026-04-17 05:22:24.158072 | orchestrator | ok: [testbed-manager] 2026-04-17 05:22:24.158086 | orchestrator | 2026-04-17 05:22:24.158099 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-17 05:22:24.158111 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-17 05:22:24.158124 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-17 05:22:24.158149 | orchestrator | Friday 17 April 2026 05:22:09 +0000 (0:00:00.887) 0:00:08.663 ********** 2026-04-17 05:22:24.158162 | orchestrator | changed: [testbed-manager] 2026-04-17 05:22:24.158174 | orchestrator | 2026-04-17 05:22:24.158187 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-17 05:22:24.158199 | orchestrator | Friday 17 April 2026 05:22:20 +0000 (0:00:11.166) 0:00:19.829 ********** 2026-04-17 05:22:24.158211 
| orchestrator | changed: [testbed-manager] 2026-04-17 05:22:24.158223 | orchestrator | 2026-04-17 05:22:24.158235 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-17 05:22:24.158248 | orchestrator | Friday 17 April 2026 05:22:21 +0000 (0:00:00.928) 0:00:20.757 ********** 2026-04-17 05:22:24.158267 | orchestrator | changed: [testbed-manager] 2026-04-17 05:22:24.158346 | orchestrator | 2026-04-17 05:22:24.158372 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-17 05:22:24.158390 | orchestrator | Friday 17 April 2026 05:22:22 +0000 (0:00:00.655) 0:00:21.413 ********** 2026-04-17 05:22:24.158408 | orchestrator | ok: [testbed-manager] 2026-04-17 05:22:24.158426 | orchestrator | 2026-04-17 05:22:24.158444 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:22:24.158463 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 05:22:24.158482 | orchestrator | 2026-04-17 05:22:24.158500 | orchestrator | 2026-04-17 05:22:24.158518 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:22:24.158537 | orchestrator | Friday 17 April 2026 05:22:23 +0000 (0:00:01.213) 0:00:22.626 ********** 2026-04-17 05:22:24.158556 | orchestrator | =============================================================================== 2026-04-17 05:22:24.158574 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 11.17s 2026-04-17 05:22:24.158585 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.75s 2026-04-17 05:22:24.158596 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.39s 2026-04-17 05:22:24.158607 | orchestrator | osism.services.openstackclient : Copy bash completion script 
------------ 1.21s 2026-04-17 05:22:24.158617 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.10s 2026-04-17 05:22:24.158642 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.09s 2026-04-17 05:22:24.158674 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.93s 2026-04-17 05:22:24.158685 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.89s 2026-04-17 05:22:24.158696 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.79s 2026-04-17 05:22:24.158707 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.66s 2026-04-17 05:22:24.378001 | orchestrator | + osism apply -a upgrade common 2026-04-17 05:22:25.821763 | orchestrator | 2026-04-17 05:22:25 | INFO  | Prepare task for execution of common. 2026-04-17 05:22:25.923489 | orchestrator | 2026-04-17 05:22:25 | INFO  | Task b6ce2f7d-e15d-4347-986b-cd7e25497b7c (common) was prepared for execution. 2026-04-17 05:22:25.923577 | orchestrator | 2026-04-17 05:22:25 | INFO  | It takes a moment until task b6ce2f7d-e15d-4347-986b-cd7e25497b7c (common) has been started and output is visible here. 
2026-04-17 05:22:45.701753 | orchestrator | 2026-04-17 05:22:45.701870 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-17 05:22:45.701889 | orchestrator | 2026-04-17 05:22:45.701901 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-17 05:22:45.701912 | orchestrator | Friday 17 April 2026 05:22:31 +0000 (0:00:02.151) 0:00:02.151 ********** 2026-04-17 05:22:45.701924 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 05:22:45.701936 | orchestrator | 2026-04-17 05:22:45.701947 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-17 05:22:45.701958 | orchestrator | Friday 17 April 2026 05:22:35 +0000 (0:00:03.777) 0:00:05.929 ********** 2026-04-17 05:22:45.701969 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:22:45.701980 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:22:45.702005 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:22:45.702075 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:22:45.702088 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:22:45.702099 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:22:45.702111 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:22:45.702122 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:22:45.702152 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:22:45.702163 | 
orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:22:45.702174 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:22:45.702185 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:22:45.702196 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:22:45.702206 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:22:45.702221 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:22:45.702232 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:22:45.702243 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:22:45.702256 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:22:45.702269 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:22:45.702304 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:22:45.702316 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:22:45.702359 | orchestrator | 2026-04-17 05:22:45.702373 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-17 05:22:45.702385 | orchestrator | Friday 17 April 2026 05:22:40 +0000 (0:00:05.020) 0:00:10.949 ********** 2026-04-17 05:22:45.702398 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 05:22:45.702412 | orchestrator | 2026-04-17 
05:22:45.702424 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-17 05:22:45.702436 | orchestrator | Friday 17 April 2026 05:22:43 +0000 (0:00:02.806) 0:00:13.755 ********** 2026-04-17 05:22:45.702452 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:22:45.702475 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:22:45.702516 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:22:45.702531 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:22:45.702543 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:22:45.702560 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:45.702583 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:45.702596 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:22:45.702609 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:45.702652 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:49.147929 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:49.148037 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:22:49.148133 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:49.148161 | orchestrator | ok: [testbed-node-4] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:49.148178 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:49.148197 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:49.148215 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 
05:22:49.148256 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:49.148277 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:49.148617 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:49.148645 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:22:49.148673 | orchestrator | 2026-04-17 05:22:49.148688 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-17 05:22:49.148702 | orchestrator | Friday 17 April 2026 05:22:48 +0000 (0:00:05.044) 0:00:18.799 ********** 2026-04-17 05:22:49.148718 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:49.148732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:49.148745 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:49.148759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:49.148787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:50.537054 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:50.537199 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:22:50.537280 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:50.537297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:50.537310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:50.537321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:50.537333 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:22:50.537400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:50.537413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:50.537424 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:22:50.537497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:50.537527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:50.537538 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:22:50.537550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:50.537560 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:22:50.537574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:50.537587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:50.537600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:50.537613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:50.537626 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:22:50.537640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:50.537667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:53.445450 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:22:53.445551 | orchestrator | 2026-04-17 05:22:53.445569 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-17 05:22:53.445583 | orchestrator | Friday 17 April 2026 05:22:51 +0000 (0:00:03.254) 0:00:22.054 ********** 2026-04-17 05:22:53.445598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:53.445613 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:53.445757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:53.445779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:53.445792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:53.445804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:53.445862 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:53.445880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:53.445893 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:22:53.445905 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:22:53.445919 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:53.445932 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:53.445945 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:22:53.445958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:53.445972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:22:53.445985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:53.446006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:22:53.446098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:06.808486 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:23:06.808605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:23:06.808625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:06.808638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:06.808651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:06.808664 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:23:06.808675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:06.808709 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:23:06.808721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:06.808732 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:23:06.808743 | orchestrator | 2026-04-17 05:23:06.808755 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-04-17 05:23:06.808767 | orchestrator | Friday 17 April 2026 05:22:55 +0000 (0:00:03.747) 0:00:25.801 ********** 2026-04-17 05:23:06.808777 | 
orchestrator | skipping: [testbed-manager] 2026-04-17 05:23:06.808788 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:23:06.808799 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:23:06.808809 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:23:06.808820 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:23:06.808830 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:23:06.808841 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:23:06.808851 | orchestrator | 2026-04-17 05:23:06.808862 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-17 05:23:06.808873 | orchestrator | Friday 17 April 2026 05:22:57 +0000 (0:00:02.053) 0:00:27.854 ********** 2026-04-17 05:23:06.808883 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:23:06.808894 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:23:06.808904 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:23:06.808915 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:23:06.808941 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:23:06.808952 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:23:06.808979 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:23:06.808992 | orchestrator | 2026-04-17 05:23:06.809005 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-17 05:23:06.809018 | orchestrator | Friday 17 April 2026 05:22:59 +0000 (0:00:02.422) 0:00:30.277 ********** 2026-04-17 05:23:06.809030 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:23:06.809043 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:23:06.809055 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:23:06.809068 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:23:06.809080 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:23:06.809093 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:23:06.809106 | 
orchestrator | skipping: [testbed-node-5] 2026-04-17 05:23:06.809118 | orchestrator | 2026-04-17 05:23:06.809131 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-17 05:23:06.809144 | orchestrator | Friday 17 April 2026 05:23:02 +0000 (0:00:02.212) 0:00:32.489 ********** 2026-04-17 05:23:06.809156 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:23:06.809168 | orchestrator | changed: [testbed-manager] 2026-04-17 05:23:06.809180 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:23:06.809193 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:23:06.809205 | orchestrator | changed: [testbed-node-3] 2026-04-17 05:23:06.809217 | orchestrator | changed: [testbed-node-4] 2026-04-17 05:23:06.809229 | orchestrator | changed: [testbed-node-5] 2026-04-17 05:23:06.809242 | orchestrator | 2026-04-17 05:23:06.809254 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-17 05:23:06.809267 | orchestrator | Friday 17 April 2026 05:23:04 +0000 (0:00:02.835) 0:00:35.325 ********** 2026-04-17 05:23:06.809281 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:06.809302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:06.809316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:06.809329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:06.809342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:06.809362 | orchestrator 
| changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:11.081257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:11.081354 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081515 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081550 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081584 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:11.081605 | orchestrator | 2026-04-17 05:23:11.081614 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-17 05:23:11.081623 | orchestrator | Friday 17 April 2026 05:23:10 +0000 (0:00:05.109) 0:00:40.434 ********** 2026-04-17 05:23:11.081631 | orchestrator | [WARNING]: Skipped 2026-04-17 05:23:11.081647 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-17 05:23:30.347938 | orchestrator | to this access issue: 2026-04-17 05:23:30.348034 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-17 05:23:30.348066 | orchestrator | directory 2026-04-17 05:23:30.348078 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 05:23:30.348089 | orchestrator | 2026-04-17 05:23:30.348100 | orchestrator | TASK 
[common : Find custom fluentd filter config files] ************************ 2026-04-17 05:23:30.348112 | orchestrator | Friday 17 April 2026 05:23:12 +0000 (0:00:02.512) 0:00:42.947 ********** 2026-04-17 05:23:30.348122 | orchestrator | [WARNING]: Skipped 2026-04-17 05:23:30.348134 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-17 05:23:30.348141 | orchestrator | to this access issue: 2026-04-17 05:23:30.348147 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-17 05:23:30.348153 | orchestrator | directory 2026-04-17 05:23:30.348159 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 05:23:30.348184 | orchestrator | 2026-04-17 05:23:30.348192 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-17 05:23:30.348198 | orchestrator | Friday 17 April 2026 05:23:14 +0000 (0:00:01.925) 0:00:44.873 ********** 2026-04-17 05:23:30.348204 | orchestrator | [WARNING]: Skipped 2026-04-17 05:23:30.348210 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-17 05:23:30.348216 | orchestrator | to this access issue: 2026-04-17 05:23:30.348223 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-17 05:23:30.348229 | orchestrator | directory 2026-04-17 05:23:30.348235 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 05:23:30.348254 | orchestrator | 2026-04-17 05:23:30.348276 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-17 05:23:30.348286 | orchestrator | Friday 17 April 2026 05:23:16 +0000 (0:00:02.291) 0:00:47.165 ********** 2026-04-17 05:23:30.348296 | orchestrator | [WARNING]: Skipped 2026-04-17 05:23:30.348307 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-17 05:23:30.348317 | 
orchestrator | to this access issue: 2026-04-17 05:23:30.348327 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-17 05:23:30.348333 | orchestrator | directory 2026-04-17 05:23:30.348339 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 05:23:30.348346 | orchestrator | 2026-04-17 05:23:30.348352 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-17 05:23:30.348358 | orchestrator | Friday 17 April 2026 05:23:18 +0000 (0:00:01.970) 0:00:49.135 ********** 2026-04-17 05:23:30.348364 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:23:30.348370 | orchestrator | changed: [testbed-manager] 2026-04-17 05:23:30.348376 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:23:30.348381 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:23:30.348388 | orchestrator | changed: [testbed-node-3] 2026-04-17 05:23:30.348393 | orchestrator | changed: [testbed-node-4] 2026-04-17 05:23:30.348399 | orchestrator | changed: [testbed-node-5] 2026-04-17 05:23:30.348405 | orchestrator | 2026-04-17 05:23:30.348427 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-17 05:23:30.348434 | orchestrator | Friday 17 April 2026 05:23:22 +0000 (0:00:03.983) 0:00:53.118 ********** 2026-04-17 05:23:30.348441 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:23:30.348449 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:23:30.348455 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:23:30.348461 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:23:30.348467 | orchestrator | ok: [testbed-node-3] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:23:30.348474 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:23:30.348489 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:23:30.348496 | orchestrator | 2026-04-17 05:23:30.348503 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-17 05:23:30.348510 | orchestrator | Friday 17 April 2026 05:23:26 +0000 (0:00:03.380) 0:00:56.498 ********** 2026-04-17 05:23:30.348517 | orchestrator | ok: [testbed-manager] 2026-04-17 05:23:30.348524 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:23:30.348531 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:23:30.348538 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:23:30.348545 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:23:30.348552 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:23:30.348559 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:23:30.348566 | orchestrator | 2026-04-17 05:23:30.348573 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-17 05:23:30.348580 | orchestrator | Friday 17 April 2026 05:23:29 +0000 (0:00:03.232) 0:00:59.731 ********** 2026-04-17 05:23:30.348602 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:30.348628 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:30.348637 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:30.348644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:30.348651 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:30.348664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:30.348672 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:30.348680 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:30.348693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:39.989240 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:39.989375 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:39.989396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:39.989410 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:39.989476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:39.989490 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:39.989508 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:39.989540 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:39.989553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:39.989571 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:39.989589 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:39.989619 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:39.989638 | orchestrator | 2026-04-17 05:23:39.989658 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-17 05:23:39.989678 | orchestrator | Friday 17 April 2026 05:23:32 +0000 (0:00:02.925) 0:01:02.656 ********** 2026-04-17 05:23:39.989695 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:23:39.989715 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:23:39.989733 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:23:39.989752 | orchestrator | ok: [testbed-node-2] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:23:39.989773 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:23:39.989792 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:23:39.989809 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:23:39.989825 | orchestrator | 2026-04-17 05:23:39.989838 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-17 05:23:39.989851 | orchestrator | Friday 17 April 2026 05:23:35 +0000 (0:00:03.210) 0:01:05.867 ********** 2026-04-17 05:23:39.989863 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:23:39.989875 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:23:39.989887 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:23:39.989899 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:23:39.989912 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:23:39.989931 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:23:39.989942 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:23:39.989956 | orchestrator | 2026-04-17 05:23:39.989975 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-17 05:23:39.989989 | orchestrator | Friday 17 April 2026 05:23:38 +0000 (0:00:03.471) 0:01:09.339 ********** 2026-04-17 05:23:39.990015 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:41.521299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:41.521487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:41.521547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:41.521563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:41.521575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:41.521587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:23:41.521615 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:41.521650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:41.521671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:41.521683 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:41.521695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:41.521707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:41.521723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:41.521737 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:41.521760 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:46.510328 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:46.510426 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:46.510439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:46.510530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:46.510539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:23:46.510548 | orchestrator | 2026-04-17 05:23:46.510558 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-04-17 05:23:46.510568 | orchestrator 
| Friday 17 April 2026 05:23:43 +0000 (0:00:04.660) 0:01:14.000 ********** 2026-04-17 05:23:46.510577 | orchestrator | changed: [testbed-manager] => { 2026-04-17 05:23:46.510586 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:23:46.510594 | orchestrator | } 2026-04-17 05:23:46.510602 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 05:23:46.510610 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:23:46.510618 | orchestrator | } 2026-04-17 05:23:46.510626 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 05:23:46.510633 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:23:46.510641 | orchestrator | } 2026-04-17 05:23:46.510649 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 05:23:46.510657 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:23:46.510665 | orchestrator | } 2026-04-17 05:23:46.510673 | orchestrator | changed: [testbed-node-3] => { 2026-04-17 05:23:46.510680 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:23:46.510688 | orchestrator | } 2026-04-17 05:23:46.510696 | orchestrator | changed: [testbed-node-4] => { 2026-04-17 05:23:46.510704 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:23:46.510712 | orchestrator | } 2026-04-17 05:23:46.510720 | orchestrator | changed: [testbed-node-5] => { 2026-04-17 05:23:46.510728 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:23:46.510736 | orchestrator | } 2026-04-17 05:23:46.510744 | orchestrator | 2026-04-17 05:23:46.510773 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 05:23:46.510795 | orchestrator | Friday 17 April 2026 05:23:45 +0000 (0:00:02.108) 0:01:16.109 ********** 2026-04-17 05:23:46.510831 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:23:46.510859 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:46.510869 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:46.510880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2026-04-17 05:23:46.510890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:46.510899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:46.510909 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:23:46.510918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:23:46.510938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:46.510949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:46.510958 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:23:46.510974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:23:54.293182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:54.293319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:54.293342 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:23:54.293357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:23:54.293369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-17 05:23:54.293414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:54.293427 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:23:54.293438 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:23:54.293449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:23:54.293526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:54.293559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:54.293572 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:23:54.293583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:23:54.293594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:54.293606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:23:54.293626 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:23:54.293637 | orchestrator | 2026-04-17 05:23:54.293649 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:23:54.293661 | orchestrator | Friday 17 April 2026 05:23:49 +0000 (0:00:03.423) 0:01:19.533 ********** 2026-04-17 05:23:54.293672 | orchestrator | 2026-04-17 05:23:54.293683 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:23:54.293696 | orchestrator | Friday 17 April 2026 05:23:49 +0000 (0:00:00.476) 0:01:20.009 ********** 2026-04-17 05:23:54.293708 | orchestrator | 2026-04-17 05:23:54.293720 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:23:54.293732 | orchestrator | Friday 17 April 2026 05:23:50 +0000 (0:00:00.448) 0:01:20.458 ********** 2026-04-17 05:23:54.293745 | orchestrator | 2026-04-17 05:23:54.293757 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:23:54.293774 | orchestrator | Friday 17 April 2026 05:23:50 +0000 (0:00:00.494) 0:01:20.952 ********** 2026-04-17 05:23:54.293786 | orchestrator | 2026-04-17 05:23:54.293799 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:23:54.293812 | orchestrator | Friday 17 April 2026 05:23:50 +0000 (0:00:00.462) 0:01:21.415 ********** 2026-04-17 05:23:54.293824 | orchestrator | 2026-04-17 05:23:54.293834 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:23:54.293845 | orchestrator | Friday 17 April 2026 05:23:51 +0000 (0:00:00.485) 0:01:21.900 ********** 2026-04-17 05:23:54.293856 | orchestrator | 2026-04-17 05:23:54.293866 | orchestrator | 
TASK [common : Flush handlers] ************************************************* 2026-04-17 05:23:54.293877 | orchestrator | Friday 17 April 2026 05:23:51 +0000 (0:00:00.519) 0:01:22.420 ********** 2026-04-17 05:23:54.293888 | orchestrator | 2026-04-17 05:23:54.293898 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-17 05:23:54.293909 | orchestrator | Friday 17 April 2026 05:23:52 +0000 (0:00:00.860) 0:01:23.280 ********** 2026-04-17 05:23:54.293934 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_j800i4ev/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_j800i4ev/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_j800i4ev/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 
429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"} 2026-04-17 05:23:56.390359 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_iefg474w/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_iefg474w/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_iefg474w/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"} 2026-04-17 05:23:56.390550 | orchestrator | fatal: [testbed-node-3]: FAILED! 
=> {"changed": true, "msg": "'docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"} 2026-04-17 05:23:56.390603 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"} 2026-04-17 05:23:56.390651 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"} 2026-04-17 05:23:59.963877 | orchestrator | 2026-04-17 05:23:59 | INFO  | Prepare task for execution of common. 2026-04-17 05:23:59.967108 | orchestrator | 2026-04-17 05:23:59 | INFO  | Task 60e8e162-fb2a-4349-b4c3-81c8bcd41735 (common) was prepared for execution. 2026-04-17 05:23:59.967156 | orchestrator | 2026-04-17 05:23:59 | INFO  | It takes a moment until task 60e8e162-fb2a-4349-b4c3-81c8bcd41735 (common) has been started and output is visible here. 2026-04-17 05:24:07.184238 | orchestrator | fatal: [testbed-node-5]: FAILED!
=> {"changed": true, "msg": "'docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"} 2026-04-17 05:24:07.184459 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"} 2026-04-17 05:24:07.184581 | orchestrator | 2026-04-17 05:24:07.184610 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:24:07.184633 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-17 05:24:07.184653 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-17 05:24:07.184672 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-17 05:24:07.184693 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-17 05:24:07.184715 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-17 05:24:07.184735 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-17 05:24:07.184752 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-17 05:24:07.184779 | orchestrator | 2026-04-17 05:24:07.184792 | orchestrator | 2026-04-17 05:24:07.184805 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:24:07.184818 | orchestrator | Friday 17 April 2026 05:23:59 +0000 (0:00:06.683) 0:01:29.963 ********** 2026-04-17 05:24:07.184831 | orchestrator | ===============================================================================
2026-04-17 05:24:07.184844 | orchestrator | common : Restart fluentd container -------------------------------------- 6.68s 2026-04-17 05:24:07.184856 | orchestrator | common : Copying over config.json files for services -------------------- 5.11s 2026-04-17 05:24:07.184868 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.04s 2026-04-17 05:24:07.184880 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.02s 2026-04-17 05:24:07.184892 | orchestrator | service-check-containers : common | Check containers -------------------- 4.66s 2026-04-17 05:24:07.184905 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.98s 2026-04-17 05:24:07.184917 | orchestrator | common : include_tasks -------------------------------------------------- 3.78s 2026-04-17 05:24:07.184929 | orchestrator | common : Flush handlers ------------------------------------------------- 3.75s 2026-04-17 05:24:07.184942 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.75s 2026-04-17 05:24:07.184955 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.47s 2026-04-17 05:24:07.184967 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.42s 2026-04-17 05:24:07.184980 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.38s 2026-04-17 05:24:07.184992 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.25s 2026-04-17 05:24:07.185005 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.23s 2026-04-17 05:24:07.185018 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.21s 2026-04-17 05:24:07.185030 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.93s 
2026-04-17 05:24:07.185041 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.84s 2026-04-17 05:24:07.185054 | orchestrator | common : include_tasks -------------------------------------------------- 2.81s 2026-04-17 05:24:07.185066 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.51s 2026-04-17 05:24:07.185078 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.42s 2026-04-17 05:24:07.185090 | orchestrator | 2026-04-17 05:24:07.185100 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-17 05:24:07.185111 | orchestrator | 2026-04-17 05:24:07.185121 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-17 05:24:07.185139 | orchestrator | Friday 17 April 2026 05:24:05 +0000 (0:00:01.996) 0:00:01.996 ********** 2026-04-17 05:24:07.185151 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 05:24:07.185161 | orchestrator | 2026-04-17 05:24:07.185186 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-17 05:24:17.692228 | orchestrator | Friday 17 April 2026 05:24:08 +0000 (0:00:03.368) 0:00:05.364 ********** 2026-04-17 05:24:17.692371 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:24:17.692388 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:24:17.692400 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:24:17.692412 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:24:17.692423 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 
2026-04-17 05:24:17.692434 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:24:17.692478 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:24:17.692489 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:24:17.692572 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:24:17.692585 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:24:17.692596 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 05:24:17.692606 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:24:17.692617 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:24:17.692628 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:24:17.692639 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:24:17.692650 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:24:17.692661 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:24:17.692671 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 05:24:17.692682 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:24:17.692693 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:24:17.692703 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 05:24:17.692715 | orchestrator | 2026-04-17 
05:24:17.692727 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-17 05:24:17.692740 | orchestrator | Friday 17 April 2026 05:24:12 +0000 (0:00:03.952) 0:00:09.316 ********** 2026-04-17 05:24:17.692754 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 05:24:17.692769 | orchestrator | 2026-04-17 05:24:17.692782 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-17 05:24:17.692794 | orchestrator | Friday 17 April 2026 05:24:15 +0000 (0:00:02.797) 0:00:12.114 ********** 2026-04-17 05:24:17.692812 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:24:17.692831 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:24:17.692863 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:24:17.692908 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:24:17.692922 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:24:17.692934 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:24:17.692950 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:17.692963 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:17.692976 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:24:17.692994 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:17.693024 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:20.841295 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:20.841410 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:20.841428 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:20.841443 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:20.841455 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:20.841467 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:20.841502 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:20.841583 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:20.841615 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:20.841627 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:20.841639 | orchestrator |
2026-04-17 05:24:20.841652 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-17 05:24:20.841664 | orchestrator | Friday 17 April 2026 05:24:20 +0000 (0:00:04.694) 0:00:16.808 **********
2026-04-17 05:24:20.841677 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:20.841692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:20.841705 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:20.841733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:20.841758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:20.841779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151018 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151141 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:24:23.151153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:23.151164 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:24:23.151173 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:24:23.151183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:23.151237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151246 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:24:23.151270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:23.151280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:23.151298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151307 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:24:23.151323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151355 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:24:23.151364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:23.151373 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:24:23.151381 | orchestrator |
2026-04-17 05:24:23.151391 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-04-17 05:24:23.151405 | orchestrator | Friday 17 April 2026 05:24:23 +0000 (0:00:02.878) 0:00:19.687 **********
2026-04-17 05:24:24.594740 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:24.594833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:24.594844 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:24.594872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:24.594892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:24.594899 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:24.594907 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:24:24.594915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:24.594938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:24.594945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:24.594951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:24.594961 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:24:24.594967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:24.594974 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:24:24.594979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:24.594988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:24.594995 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:24:24.595000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:24.595012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:37.574006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:37.574219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:37.574263 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:24:37.574279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:37.574292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:37.574317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:37.574329 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:24:37.574340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:37.574352 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:24:37.574363 | orchestrator |
2026-04-17 05:24:37.574374 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-04-17 05:24:37.574386 | orchestrator | Friday 17 April 2026 05:24:26 +0000 (0:00:03.394) 0:00:23.082 **********
2026-04-17 05:24:37.574397 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:24:37.574407 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:24:37.574418 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:24:37.574428 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:24:37.574439 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:24:37.574450 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:24:37.574461 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:24:37.574471 | orchestrator |
2026-04-17 05:24:37.574482 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-17 05:24:37.574492 | orchestrator | Friday 17 April 2026 05:24:28 +0000 (0:00:01.961) 0:00:25.044 **********
2026-04-17 05:24:37.574503 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:24:37.574513 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:24:37.574525 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:24:37.574576 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:24:37.574596 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:24:37.574626 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:24:37.574666 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:24:37.574689 | orchestrator |
2026-04-17 05:24:37.574710 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-17 05:24:37.574730 | orchestrator | Friday 17 April 2026 05:24:31 +0000 (0:00:02.580) 0:00:27.625 **********
2026-04-17 05:24:37.574750 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:24:37.574769 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:24:37.574790 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:24:37.574811 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:24:37.574832 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:24:37.574852 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:24:37.574873 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:24:37.574894 | orchestrator |
2026-04-17 05:24:37.574913 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-04-17 05:24:37.574933 | orchestrator | Friday 17 April 2026 05:24:33 +0000 (0:00:02.156) 0:00:29.781 **********
2026-04-17 05:24:37.574953 | orchestrator | ok: [testbed-manager]
2026-04-17 05:24:37.574973 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:24:37.574993 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:24:37.575013 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:24:37.575032 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:24:37.575052 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:24:37.575073 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:24:37.575094 | orchestrator |
2026-04-17 05:24:37.575108 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-04-17 05:24:37.575119 | orchestrator | Friday 17 April 2026 05:24:35 +0000 (0:00:02.714) 0:00:32.496 **********
2026-04-17 05:24:37.575131 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:37.575145 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:37.575157 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:37.575169 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:37.575180 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:37.575224 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:42.325860 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:42.325969 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-17 05:24:42.326469 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:42.326520 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:42.326534 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:42.326603 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:42.326639 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:42.326651 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:42.326663 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:42.326674 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 05:24:42.326690 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes':
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:42.326701 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:42.326711 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:42.326730 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:42.326741 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:24:42.326751 | orchestrator | 2026-04-17 05:24:42.326762 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-17 05:24:42.326773 | orchestrator | Friday 17 April 2026 05:24:41 +0000 (0:00:05.232) 0:00:37.729 ********** 2026-04-17 05:24:42.326790 | orchestrator | [WARNING]: Skipped 2026-04-17 05:25:01.309729 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-17 05:25:01.309847 | orchestrator | to this access issue: 2026-04-17 05:25:01.309864 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-17 05:25:01.309876 | orchestrator | directory 2026-04-17 05:25:01.309888 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 05:25:01.309900 | orchestrator | 2026-04-17 05:25:01.309912 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-17 05:25:01.309924 | orchestrator | Friday 17 April 2026 05:24:43 +0000 (0:00:02.591) 0:00:40.320 ********** 2026-04-17 05:25:01.309935 | orchestrator | [WARNING]: Skipped 2026-04-17 05:25:01.309946 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-17 05:25:01.309957 | orchestrator | to this access issue: 2026-04-17 05:25:01.309968 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-17 05:25:01.309979 | orchestrator | directory 2026-04-17 05:25:01.309990 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 05:25:01.310001 | orchestrator | 2026-04-17 05:25:01.310012 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-17 05:25:01.310121 | orchestrator | Friday 17 April 2026 05:24:45 +0000 (0:00:02.026) 0:00:42.347 ********** 2026-04-17 05:25:01.310142 | orchestrator 
| [WARNING]: Skipped 2026-04-17 05:25:01.310161 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-17 05:25:01.310179 | orchestrator | to this access issue: 2026-04-17 05:25:01.310191 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-17 05:25:01.310202 | orchestrator | directory 2026-04-17 05:25:01.310213 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 05:25:01.310224 | orchestrator | 2026-04-17 05:25:01.310236 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-17 05:25:01.310246 | orchestrator | Friday 17 April 2026 05:24:48 +0000 (0:00:02.223) 0:00:44.571 ********** 2026-04-17 05:25:01.310257 | orchestrator | [WARNING]: Skipped 2026-04-17 05:25:01.310268 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-17 05:25:01.310279 | orchestrator | to this access issue: 2026-04-17 05:25:01.310290 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-17 05:25:01.310322 | orchestrator | directory 2026-04-17 05:25:01.310333 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 05:25:01.310344 | orchestrator | 2026-04-17 05:25:01.310355 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-17 05:25:01.310366 | orchestrator | Friday 17 April 2026 05:24:50 +0000 (0:00:02.126) 0:00:46.698 ********** 2026-04-17 05:25:01.310376 | orchestrator | ok: [testbed-manager] 2026-04-17 05:25:01.310387 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:25:01.310398 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:25:01.310417 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:25:01.310428 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:25:01.310439 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:25:01.310449 | orchestrator | ok: 
[testbed-node-5] 2026-04-17 05:25:01.310460 | orchestrator | 2026-04-17 05:25:01.310470 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-17 05:25:01.310481 | orchestrator | Friday 17 April 2026 05:24:53 +0000 (0:00:03.806) 0:00:50.504 ********** 2026-04-17 05:25:01.310492 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:25:01.310504 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:25:01.310515 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:25:01.310525 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:25:01.310536 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:25:01.310546 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:25:01.310557 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 05:25:01.310568 | orchestrator | 2026-04-17 05:25:01.310608 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-17 05:25:01.310629 | orchestrator | Friday 17 April 2026 05:24:57 +0000 (0:00:03.316) 0:00:53.820 ********** 2026-04-17 05:25:01.310644 | orchestrator | ok: [testbed-manager] 2026-04-17 05:25:01.310655 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:25:01.310666 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:25:01.310676 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:25:01.310687 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:25:01.310698 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:25:01.310708 | orchestrator | ok: 
[testbed-node-5] 2026-04-17 05:25:01.310719 | orchestrator | 2026-04-17 05:25:01.310730 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-17 05:25:01.310741 | orchestrator | Friday 17 April 2026 05:25:00 +0000 (0:00:03.162) 0:00:56.982 ********** 2026-04-17 05:25:01.310756 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:01.310792 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:01.310814 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:01.310826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:01.310843 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:01.310854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:01.310867 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:01.310879 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:01.310899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:11.078135 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:11.078298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:11.078347 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:11.078393 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:11.078415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:11.078436 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:11.078448 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:11.078510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:11.078525 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:11.078537 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:11.078559 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:11.078579 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:11.078625 | orchestrator | 2026-04-17 05:25:11.078648 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-17 05:25:11.078670 | orchestrator | Friday 17 April 2026 05:25:03 +0000 (0:00:02.873) 0:00:59.856 ********** 2026-04-17 05:25:11.078690 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:25:11.078709 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:25:11.078721 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:25:11.078731 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:25:11.078742 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:25:11.078752 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:25:11.078762 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 05:25:11.078773 | orchestrator | 2026-04-17 05:25:11.078783 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-17 05:25:11.078794 | orchestrator | Friday 17 April 2026 05:25:06 +0000 (0:00:03.137) 0:01:02.993 ********** 2026-04-17 05:25:11.078804 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:25:11.078815 | 
orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:25:11.078835 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:25:11.078846 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:25:11.078856 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:25:11.078867 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:25:11.078877 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 05:25:11.078888 | orchestrator | 2026-04-17 05:25:11.078898 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-17 05:25:11.078909 | orchestrator | Friday 17 April 2026 05:25:09 +0000 (0:00:03.411) 0:01:06.405 ********** 2026-04-17 05:25:11.078930 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:12.391240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:12.391349 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:12.391383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:12.391396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:12.391409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:12.391453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 05:25:12.391467 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:12.391498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:12.391511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:12.391522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:12.391534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:12.391552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:12.391701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:12.391738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:12.391776 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:17.283536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:17.283691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:17.283710 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:17.283723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:17.283754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:25:17.283767 | orchestrator | 2026-04-17 05:25:17.283780 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-04-17 05:25:17.283793 | orchestrator | Friday 17 April 2026 05:25:14 +0000 (0:00:04.466) 0:01:10.871 ********** 2026-04-17 05:25:17.283804 | orchestrator | changed: [testbed-manager] => { 2026-04-17 05:25:17.283816 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:25:17.283827 | orchestrator | } 2026-04-17 05:25:17.283838 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 05:25:17.283849 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:25:17.283859 | orchestrator | } 2026-04-17 05:25:17.283870 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 05:25:17.283881 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:25:17.283892 | orchestrator | } 
2026-04-17 05:25:17.283903 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 05:25:17.283913 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:25:17.283924 | orchestrator | } 2026-04-17 05:25:17.283935 | orchestrator | changed: [testbed-node-3] => { 2026-04-17 05:25:17.283945 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:25:17.283956 | orchestrator | } 2026-04-17 05:25:17.283966 | orchestrator | changed: [testbed-node-4] => { 2026-04-17 05:25:17.283977 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:25:17.283987 | orchestrator | } 2026-04-17 05:25:17.283998 | orchestrator | changed: [testbed-node-5] => { 2026-04-17 05:25:17.284008 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:25:17.284019 | orchestrator | } 2026-04-17 05:25:17.284030 | orchestrator | 2026-04-17 05:25:17.284041 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 05:25:17.284052 | orchestrator | Friday 17 April 2026 05:25:16 +0000 (0:00:02.096) 0:01:12.968 ********** 2026-04-17 05:25:17.284065 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:25:17.284106 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:17.284122 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:17.284143 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:25:17.284161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:25:17.284176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:17.284189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:17.284203 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:25:17.284215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:25:17.284228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:17.284241 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:25:17.284263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:26:39.192311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:26:39.192426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:26:39.192443 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:26:39.192456 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:26:39.192467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:26:39.192477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:26:39.192488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:26:39.192498 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:26:39.192508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:26:39.192519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:26:39.192570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:26:39.192583 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:26:39.192593 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 05:26:39.192603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:26:39.192613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:26:39.192623 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:26:39.192633 | orchestrator | 2026-04-17 05:26:39.192643 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:26:39.192653 | orchestrator | Friday 17 April 2026 05:25:19 +0000 (0:00:03.394) 0:01:16.362 
********** 2026-04-17 05:26:39.192663 | orchestrator | 2026-04-17 05:26:39.192673 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:26:39.192682 | orchestrator | Friday 17 April 2026 05:25:20 +0000 (0:00:00.487) 0:01:16.850 ********** 2026-04-17 05:26:39.192692 | orchestrator | 2026-04-17 05:26:39.192702 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:26:39.192712 | orchestrator | Friday 17 April 2026 05:25:20 +0000 (0:00:00.470) 0:01:17.321 ********** 2026-04-17 05:26:39.192721 | orchestrator | 2026-04-17 05:26:39.192731 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:26:39.192814 | orchestrator | Friday 17 April 2026 05:25:21 +0000 (0:00:00.470) 0:01:17.792 ********** 2026-04-17 05:26:39.192836 | orchestrator | 2026-04-17 05:26:39.192852 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:26:39.192867 | orchestrator | Friday 17 April 2026 05:25:21 +0000 (0:00:00.441) 0:01:18.233 ********** 2026-04-17 05:26:39.192878 | orchestrator | 2026-04-17 05:26:39.192889 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:26:39.192900 | orchestrator | Friday 17 April 2026 05:25:22 +0000 (0:00:00.443) 0:01:18.677 ********** 2026-04-17 05:26:39.192912 | orchestrator | 2026-04-17 05:26:39.192923 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 05:26:39.192944 | orchestrator | Friday 17 April 2026 05:25:22 +0000 (0:00:00.442) 0:01:19.119 ********** 2026-04-17 05:26:39.192955 | orchestrator | 2026-04-17 05:26:39.192966 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-17 05:26:39.192976 | orchestrator | Friday 17 April 2026 05:25:23 +0000 (0:00:00.908) 
0:01:20.027 ********** 2026-04-17 05:26:39.192988 | orchestrator | changed: [testbed-manager] 2026-04-17 05:26:39.192999 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:26:39.193010 | orchestrator | changed: [testbed-node-4] 2026-04-17 05:26:39.193021 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:26:39.193032 | orchestrator | changed: [testbed-node-5] 2026-04-17 05:26:39.193043 | orchestrator | changed: [testbed-node-3] 2026-04-17 05:26:39.193053 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:26:39.193064 | orchestrator | 2026-04-17 05:26:39.193076 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-17 05:26:39.193087 | orchestrator | Friday 17 April 2026 05:26:03 +0000 (0:00:40.115) 0:02:00.143 ********** 2026-04-17 05:26:39.193098 | orchestrator | changed: [testbed-manager] 2026-04-17 05:26:39.193109 | orchestrator | changed: [testbed-node-3] 2026-04-17 05:26:39.193120 | orchestrator | changed: [testbed-node-4] 2026-04-17 05:26:39.193131 | orchestrator | changed: [testbed-node-5] 2026-04-17 05:26:39.193142 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:26:39.193153 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:26:39.193164 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:26:39.193175 | orchestrator | 2026-04-17 05:26:39.193197 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-17 05:26:56.181343 | orchestrator | Friday 17 April 2026 05:26:39 +0000 (0:00:36.383) 0:02:36.526 ********** 2026-04-17 05:26:56.181468 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:26:56.181494 | orchestrator | ok: [testbed-manager] 2026-04-17 05:26:56.181513 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:26:56.181531 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:26:56.181550 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:26:56.181567 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:26:56.181585 | 
orchestrator | ok: [testbed-node-5] 2026-04-17 05:26:56.181601 | orchestrator | 2026-04-17 05:26:56.181640 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-17 05:26:56.181659 | orchestrator | Friday 17 April 2026 05:26:43 +0000 (0:00:03.143) 0:02:39.670 ********** 2026-04-17 05:26:56.181677 | orchestrator | changed: [testbed-manager] 2026-04-17 05:26:56.181696 | orchestrator | changed: [testbed-node-3] 2026-04-17 05:26:56.181713 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:26:56.181730 | orchestrator | changed: [testbed-node-4] 2026-04-17 05:26:56.181749 | orchestrator | changed: [testbed-node-5] 2026-04-17 05:26:56.181767 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:26:56.181818 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:26:56.181835 | orchestrator | 2026-04-17 05:26:56.181851 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:26:56.181869 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 05:26:56.181889 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 05:26:56.181909 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 05:26:56.181928 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 05:26:56.181947 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 05:26:56.181999 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 05:26:56.182105 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 05:26:56.182130 | orchestrator | 2026-04-17 05:26:56.182149 | orchestrator 
| 2026-04-17 05:26:56.182168 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:26:56.182187 | orchestrator | Friday 17 April 2026 05:26:55 +0000 (0:00:12.529) 0:02:52.200 ********** 2026-04-17 05:26:56.182198 | orchestrator | =============================================================================== 2026-04-17 05:26:56.182209 | orchestrator | common : Restart fluentd container ------------------------------------- 40.12s 2026-04-17 05:26:56.182220 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 36.38s 2026-04-17 05:26:56.182230 | orchestrator | common : Restart cron container ---------------------------------------- 12.53s 2026-04-17 05:26:56.182241 | orchestrator | common : Copying over config.json files for services -------------------- 5.23s 2026-04-17 05:26:56.182251 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.69s 2026-04-17 05:26:56.182262 | orchestrator | service-check-containers : common | Check containers -------------------- 4.47s 2026-04-17 05:26:56.182273 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.95s 2026-04-17 05:26:56.182283 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.81s 2026-04-17 05:26:56.182294 | orchestrator | common : Flush handlers ------------------------------------------------- 3.66s 2026-04-17 05:26:56.182305 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.41s 2026-04-17 05:26:56.182315 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.40s 2026-04-17 05:26:56.182326 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.39s 2026-04-17 05:26:56.182336 | orchestrator | common : include_tasks -------------------------------------------------- 3.37s 2026-04-17 
05:26:56.182347 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.32s 2026-04-17 05:26:56.182358 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.16s 2026-04-17 05:26:56.182369 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.14s 2026-04-17 05:26:56.182380 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.14s 2026-04-17 05:26:56.182390 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.88s 2026-04-17 05:26:56.182402 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.87s 2026-04-17 05:26:56.182413 | orchestrator | common : include_tasks -------------------------------------------------- 2.80s 2026-04-17 05:26:56.415673 | orchestrator | + osism apply -a upgrade loadbalancer 2026-04-17 05:26:57.840388 | orchestrator | 2026-04-17 05:26:57 | INFO  | Prepare task for execution of loadbalancer. 2026-04-17 05:26:57.913549 | orchestrator | 2026-04-17 05:26:57 | INFO  | Task 6dfd5113-3a4a-48c2-993e-3e72bf640cce (loadbalancer) was prepared for execution. 2026-04-17 05:26:57.913645 | orchestrator | 2026-04-17 05:26:57 | INFO  | It takes a moment until task 6dfd5113-3a4a-48c2-993e-3e72bf640cce (loadbalancer) has been started and output is visible here. 
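The PLAY RECAP block above reports per-host counters (`ok`, `changed`, `unreachable`, `failed`, …) for the common-role upgrade. A minimal sketch (not part of the job itself; the regex and function names are illustrative assumptions) of how such recap lines can be parsed to flag failed or unreachable hosts when post-processing a log like this one:

```python
import re

# Matches one PLAY RECAP line, e.g.
#   testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6 ...
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(line):
    """Return per-host counters from a PLAY RECAP line, or None if no match."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    d = m.groupdict()
    return {"host": d["host"],
            **{k: int(v) for k, v in d.items() if k != "host"}}

# Example taken from the recap above:
line = ("testbed-manager : ok=22  changed=5  unreachable=0 "
        "failed=0 skipped=6  rescued=0 ignored=0")
recap = parse_recap(line)
assert recap["host"] == "testbed-manager"
assert recap["failed"] == 0 and recap["unreachable"] == 0
```

A small filter like this is enough to scan a multi-play log and fail fast on the first host with `failed > 0` or `unreachable > 0`.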
2026-04-17 05:27:19.017477 | orchestrator | 2026-04-17 05:27:19.017591 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 05:27:19.017624 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-17 05:27:19.017637 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-17 05:27:19.017660 | orchestrator | 2026-04-17 05:27:19.017671 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 05:27:19.017704 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-17 05:27:19.017715 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-17 05:27:19.017737 | orchestrator | Friday 17 April 2026 05:27:03 +0000 (0:00:01.495) 0:00:01.496 ********** 2026-04-17 05:27:19.017748 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:27:19.017760 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:27:19.017771 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:27:19.017782 | orchestrator | 2026-04-17 05:27:19.017792 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 05:27:19.017803 | orchestrator | Friday 17 April 2026 05:27:03 +0000 (0:00:00.763) 0:00:02.259 ********** 2026-04-17 05:27:19.017882 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-17 05:27:19.017893 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-17 05:27:19.017904 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-17 05:27:19.017914 | orchestrator | 2026-04-17 05:27:19.017925 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-17 05:27:19.017935 | orchestrator | 2026-04-17 05:27:19.017947 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 
2026-04-17 05:27:19.017958 | orchestrator | Friday 17 April 2026 05:27:04 +0000 (0:00:00.823) 0:00:03.082 **********
2026-04-17 05:27:19.017969 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 05:27:19.017980 | orchestrator |
2026-04-17 05:27:19.017991 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-04-17 05:27:19.018004 | orchestrator | Friday 17 April 2026 05:27:05 +0000 (0:00:01.276) 0:00:04.359 **********
2026-04-17 05:27:19.018069 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:27:19.018083 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:27:19.018096 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:27:19.018107 | orchestrator |
2026-04-17 05:27:19.018120 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-04-17 05:27:19.018133 | orchestrator | Friday 17 April 2026 05:27:07 +0000 (0:00:01.408) 0:00:05.767 **********
2026-04-17 05:27:19.018145 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:27:19.018158 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:27:19.018170 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:27:19.018181 | orchestrator |
2026-04-17 05:27:19.018194 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-17 05:27:19.018207 | orchestrator | Friday 17 April 2026 05:27:08 +0000 (0:00:01.087) 0:00:06.855 **********
2026-04-17 05:27:19.018220 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:27:19.018232 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:27:19.018244 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:27:19.018257 | orchestrator |
2026-04-17 05:27:19.018269 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-17 05:27:19.018282 | orchestrator | Friday 17 April 2026 05:27:09 +0000 (0:00:00.861) 0:00:07.717 **********
2026-04-17 05:27:19.018295 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 05:27:19.018307 | orchestrator |
2026-04-17 05:27:19.018319 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-17 05:27:19.018332 | orchestrator | Friday 17 April 2026 05:27:10 +0000 (0:00:01.016) 0:00:08.733 **********
2026-04-17 05:27:19.018343 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:27:19.018356 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:27:19.018368 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:27:19.018379 | orchestrator |
2026-04-17 05:27:19.018390 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-17 05:27:19.018400 | orchestrator | Friday 17 April 2026 05:27:10 +0000 (0:00:00.640) 0:00:09.374 **********
2026-04-17 05:27:19.018420 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-17 05:27:19.018431 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-17 05:27:19.018442 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-17 05:27:19.018452 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-17 05:27:19.018463 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-17 05:27:19.018474 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-17 05:27:19.018485 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-17 05:27:19.018496 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-17 05:27:19.018506 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-17 05:27:19.018517 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-17 05:27:19.018527 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-17 05:27:19.018555 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-17 05:27:19.018567 | orchestrator |
2026-04-17 05:27:19.018578 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-17 05:27:19.018595 | orchestrator | Friday 17 April 2026 05:27:14 +0000 (0:00:03.793) 0:00:13.168 **********
2026-04-17 05:27:19.018607 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-04-17 05:27:19.018618 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-04-17 05:27:19.018629 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-04-17 05:27:19.018640 | orchestrator |
2026-04-17 05:27:19.018651 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-17 05:27:19.018662 | orchestrator | Friday 17 April 2026 05:27:15 +0000 (0:00:00.734) 0:00:13.902 **********
2026-04-17 05:27:19.018673 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-04-17 05:27:19.018683 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-04-17 05:27:19.018694 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-04-17 05:27:19.018705 | orchestrator |
2026-04-17 05:27:19.018715 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-17 05:27:19.018726 | orchestrator | Friday 17 April 2026 05:27:16 +0000 (0:00:01.192) 0:00:15.094 **********
2026-04-17 05:27:19.018737 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-17 05:27:19.018748 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:27:19.018759 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-17 05:27:19.018770 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:27:19.018781 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-17 05:27:19.018791 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:27:19.018802 | orchestrator |
2026-04-17 05:27:19.018834 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-17 05:27:19.018845 | orchestrator | Friday 17 April 2026 05:27:17 +0000 (0:00:01.256) 0:00:16.351 **********
2026-04-17 05:27:19.018860 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 05:27:19.018878 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 05:27:19.018897 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 05:27:19.018909 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:19.018935 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:25.956215 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:25.956329 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:25.956346 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:25.956378 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:25.956391 | orchestrator |
2026-04-17 05:27:25.956404 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-17 05:27:25.956416 | orchestrator | Friday 17 April 2026 05:27:19 +0000 (0:00:01.696) 0:00:18.047 **********
2026-04-17 05:27:25.956427 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:27:25.956439 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:27:25.956451 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:27:25.956461 | orchestrator |
2026-04-17 05:27:25.956473 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-17 05:27:25.956483 | orchestrator | Friday 17 April 2026 05:27:21 +0000 (0:00:01.519) 0:00:19.567 **********
2026-04-17 05:27:25.956494 | orchestrator | ok: [testbed-node-0] => (item=users)
2026-04-17 05:27:25.956506 | orchestrator | ok: [testbed-node-1] => (item=users)
2026-04-17 05:27:25.956517 | orchestrator | ok: [testbed-node-2] => (item=users)
2026-04-17 05:27:25.956528 | orchestrator | ok: [testbed-node-0] => (item=rules)
2026-04-17 05:27:25.956538 | orchestrator | ok: [testbed-node-1] => (item=rules)
2026-04-17 05:27:25.956549 | orchestrator | ok: [testbed-node-2] => (item=rules)
2026-04-17 05:27:25.956559 | orchestrator |
2026-04-17 05:27:25.956570 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-17 05:27:25.956581 | orchestrator | Friday 17 April 2026 05:27:22 +0000 (0:00:01.634) 0:00:21.201 **********
2026-04-17 05:27:25.956591 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:27:25.956602 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:27:25.956613 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:27:25.956623 | orchestrator |
2026-04-17 05:27:25.956634 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-17 05:27:25.956644 | orchestrator | Friday 17 April 2026 05:27:23 +0000 (0:00:00.986) 0:00:22.188 **********
2026-04-17 05:27:25.956655 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:27:25.956666 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:27:25.956676 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:27:25.956687 | orchestrator |
2026-04-17 05:27:25.956698 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-04-17 05:27:25.956708 | orchestrator | Friday 17 April 2026 05:27:25 +0000 (0:00:01.469) 0:00:23.658 **********
2026-04-17 05:27:25.956739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 05:27:25.956799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:25.956865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:25.956891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-17 05:27:25.956911 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:27:25.956932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 05:27:25.956953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:25.956982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:25.957018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-17 05:27:28.772143 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:27:28.772250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 05:27:28.772268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:28.772281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:28.772293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-17 05:27:28.772305 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:27:28.772317 | orchestrator |
2026-04-17 05:27:28.772329 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-04-17 05:27:28.772341 | orchestrator | Friday 17 April 2026 05:27:26 +0000 (0:00:00.978) 0:00:24.636 **********
2026-04-17 05:27:28.772367 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 05:27:28.772424 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 05:27:28.772447 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 05:27:28.772467 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:28.772487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:28.772506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-17 05:27:28.772526 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:28.772566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:28.772599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-17 05:27:34.542467 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:34.542582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:34.542599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859', '__omit_place_holder__6cc193e5f907a6bb2490a5042940b8ebe4640859'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-17 05:27:34.542612 | orchestrator |
2026-04-17 05:27:34.542626 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-04-17 05:27:34.542638 | orchestrator | Friday 17 April 2026 05:27:29 +0000 (0:00:02.964) 0:00:27.602 **********
2026-04-17 05:27:34.542650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 05:27:34.542699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 05:27:34.542713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 05:27:34.542745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:34.542758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:34.542769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 05:27:34.542781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:34.542805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:34.542817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 05:27:34.542828 | orchestrator |
2026-04-17 05:27:34.542908 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-04-17 05:27:34.542920 | orchestrator | Friday 17 April 2026 05:27:32 +0000 (0:00:03.504) 0:00:31.107 **********
2026-04-17 05:27:34.542930 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-17 05:27:34.542943 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-17 05:27:34.542953 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-17 05:27:34.542964 | orchestrator |
2026-04-17 05:27:34.542975 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-04-17 05:27:34.542994 | orchestrator | Friday 17 April 2026 05:27:34 +0000 (0:00:01.894) 0:00:33.002 **********
2026-04-17 05:27:51.547195 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-17 05:27:51.547314 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-17 05:27:51.547330 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-17 05:27:51.547342 | orchestrator |
2026-04-17 05:27:51.547354 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-04-17 05:27:51.547365 | orchestrator | Friday 17 April 2026 05:27:37 +0000 (0:00:03.427) 0:00:36.429 **********
2026-04-17 05:27:51.547376 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:27:51.547388 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:27:51.547400 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:27:51.547411 | orchestrator |
2026-04-17 05:27:51.547422 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-04-17 05:27:51.547432 | orchestrator | Friday 17 April 2026 05:27:38 +0000 (0:00:00.601) 0:00:37.030 **********
2026-04-17 
05:27:51.547444 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-17 05:27:51.547455 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-17 05:27:51.547465 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-17 05:27:51.547476 | orchestrator | 2026-04-17 05:27:51.547486 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-17 05:27:51.547497 | orchestrator | Friday 17 April 2026 05:27:40 +0000 (0:00:02.202) 0:00:39.233 ********** 2026-04-17 05:27:51.547507 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-17 05:27:51.547518 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-17 05:27:51.547555 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-17 05:27:51.547566 | orchestrator | 2026-04-17 05:27:51.547577 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-17 05:27:51.547588 | orchestrator | Friday 17 April 2026 05:27:42 +0000 (0:00:02.229) 0:00:41.463 ********** 2026-04-17 05:27:51.547599 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:27:51.547610 | orchestrator | 2026-04-17 05:27:51.547620 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-17 05:27:51.547631 | orchestrator | Friday 17 April 2026 05:27:43 +0000 (0:00:00.984) 0:00:42.447 ********** 2026-04-17 05:27:51.547642 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 
2026-04-17 05:27:51.547653 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-04-17 05:27:51.547664 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-04-17 05:27:51.547674 | orchestrator | 2026-04-17 05:27:51.547685 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-17 05:27:51.547696 | orchestrator | Friday 17 April 2026 05:27:45 +0000 (0:00:01.694) 0:00:44.142 ********** 2026-04-17 05:27:51.547709 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-17 05:27:51.547721 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-17 05:27:51.547733 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-17 05:27:51.547745 | orchestrator | 2026-04-17 05:27:51.547757 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-17 05:27:51.547769 | orchestrator | Friday 17 April 2026 05:27:47 +0000 (0:00:01.921) 0:00:46.063 ********** 2026-04-17 05:27:51.547781 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:27:51.547794 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:27:51.547820 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:27:51.547833 | orchestrator | 2026-04-17 05:27:51.547845 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-17 05:27:51.547858 | orchestrator | Friday 17 April 2026 05:27:47 +0000 (0:00:00.365) 0:00:46.428 ********** 2026-04-17 05:27:51.547899 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:27:51.547911 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:27:51.547923 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:27:51.547935 | orchestrator | 2026-04-17 05:27:51.547947 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-17 05:27:51.547959 | orchestrator | Friday 17 April 2026 05:27:48 +0000 
(0:00:00.722) 0:00:47.151 ********** 2026-04-17 05:27:51.547975 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-17 05:27:51.548011 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-17 05:27:51.548036 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-17 05:27:51.548049 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 05:27:51.548062 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 05:27:51.548080 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 05:27:51.548091 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 05:27:51.548110 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 05:27:53.181999 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 05:27:53.182232 | orchestrator | 2026-04-17 05:27:53.182248 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 
2026-04-17 05:27:53.182260 | orchestrator | Friday 17 April 2026 05:27:51 +0000 (0:00:03.040) 0:00:50.192 ********** 2026-04-17 05:27:53.182273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-17 05:27:53.182284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:27:53.182294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:27:53.182305 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:27:53.182329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-17 05:27:53.182340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:27:53.182369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:27:53.182388 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:27:53.182399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 05:27:53.182409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:27:53.182419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:27:53.182428 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:27:53.182438 | orchestrator | 2026-04-17 05:27:53.182448 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-17 05:27:53.182458 | orchestrator | Friday 17 April 2026 05:27:52 +0000 (0:00:01.068) 0:00:51.260 ********** 2026-04-17 05:27:53.182468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-17 05:27:53.182478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:27:53.182505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:28:00.567667 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:00.567784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 05:28:00.567810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:28:00.567831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-17 05:28:00.567972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:28:00.568001 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:00.568014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:28:00.568026 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:28:00.568057 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:00.568068 | orchestrator | 2026-04-17 05:28:00.568081 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-17 05:28:00.568093 | orchestrator | Friday 17 April 2026 05:27:53 +0000 (0:00:00.961) 0:00:52.221 ********** 2026-04-17 05:28:00.568104 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-17 05:28:00.568137 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-17 05:28:00.568149 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-17 05:28:00.568160 | orchestrator | 2026-04-17 05:28:00.568171 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-17 05:28:00.568181 | orchestrator | Friday 17 April 2026 05:27:55 +0000 (0:00:01.832) 0:00:54.053 ********** 2026-04-17 05:28:00.568192 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-17 05:28:00.568205 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-17 05:28:00.568218 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 
2026-04-17 05:28:00.568230 | orchestrator | 2026-04-17 05:28:00.568242 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-17 05:28:00.568255 | orchestrator | Friday 17 April 2026 05:27:57 +0000 (0:00:01.804) 0:00:55.857 ********** 2026-04-17 05:28:00.568268 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 05:28:00.568281 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 05:28:00.568293 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 05:28:00.568306 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 05:28:00.568319 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:00.568332 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 05:28:00.568344 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:00.568357 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 05:28:00.568369 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:00.568382 | orchestrator | 2026-04-17 05:28:00.568395 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-17 05:28:00.568407 | orchestrator | Friday 17 April 2026 05:27:58 +0000 (0:00:01.277) 0:00:57.135 ********** 2026-04-17 05:28:00.568421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-17 05:28:00.568448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-17 05:28:00.568462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-17 05:28:00.568485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 05:28:02.798697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 05:28:02.798810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 05:28:02.798828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 05:28:02.798858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 05:28:02.798984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 05:28:02.799001 | orchestrator | 2026-04-17 05:28:02.799015 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-17 05:28:02.799027 | orchestrator | Friday 17 April 2026 05:28:01 +0000 (0:00:02.873) 0:01:00.009 ********** 2026-04-17 05:28:02.799039 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 05:28:02.799051 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:28:02.799062 | orchestrator | } 2026-04-17 
05:28:02.799074 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 05:28:02.799084 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:28:02.799095 | orchestrator | } 2026-04-17 05:28:02.799105 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 05:28:02.799116 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:28:02.799126 | orchestrator | } 2026-04-17 05:28:02.799137 | orchestrator | 2026-04-17 05:28:02.799149 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 05:28:02.799160 | orchestrator | Friday 17 April 2026 05:28:02 +0000 (0:00:00.685) 0:01:00.695 ********** 2026-04-17 05:28:02.799190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-17 05:28:02.799203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:28:02.799215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:28:02.799227 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:02.799253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-17 05:28:02.799273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:28:02.799287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:28:02.799300 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:02.799313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 05:28:02.799335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:28:08.579230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:28:08.579367 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:08.579388 | orchestrator | 2026-04-17 05:28:08.579401 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-17 05:28:08.580205 | orchestrator | Friday 17 April 2026 05:28:03 +0000 (0:00:01.121) 0:01:01.816 ********** 2026-04-17 05:28:08.580233 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:28:08.580244 | orchestrator | 2026-04-17 05:28:08.580255 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-17 05:28:08.580266 | orchestrator | Friday 17 April 2026 05:28:04 +0000 (0:00:01.300) 0:01:03.116 ********** 2026-04-17 05:28:08.580298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:08.580313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 05:28:08.580325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 05:28:08.580339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 05:28:08.580373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:08.580400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 05:28:08.580417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 05:28:08.580429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 05:28:08.580441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:08.580460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 05:28:09.590376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 05:28:09.590502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 05:28:09.590519 | orchestrator | 2026-04-17 05:28:09.590532 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-17 05:28:09.590544 | orchestrator | Friday 17 April 2026 05:28:08 +0000 (0:00:04.075) 0:01:07.192 ********** 2026-04-17 05:28:09.590572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:28:09.590589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 05:28:09.590602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 05:28:09.590634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 05:28:09.590653 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:09.590666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:28:09.590678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 05:28:09.590695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 05:28:09.590707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 05:28:09.590718 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:09.590729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:28:09.590755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 05:28:19.458552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 05:28:19.458687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 05:28:19.458706 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:19.458720 | orchestrator | 2026-04-17 05:28:19.458750 | orchestrator | TASK [haproxy-config : 
Configuring firewall for aodh] ************************** 2026-04-17 05:28:19.458772 | orchestrator | Friday 17 April 2026 05:28:09 +0000 (0:00:01.253) 0:01:08.446 ********** 2026-04-17 05:28:19.458786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:19.458802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:19.458815 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:19.458827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:19.458838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:19.458849 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:19.458860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:19.458871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:19.458904 | orchestrator | 
skipping: [testbed-node-2] 2026-04-17 05:28:19.458946 | orchestrator | 2026-04-17 05:28:19.458957 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-17 05:28:19.458968 | orchestrator | Friday 17 April 2026 05:28:11 +0000 (0:00:01.445) 0:01:09.891 ********** 2026-04-17 05:28:19.458979 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:28:19.458991 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:28:19.459002 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:28:19.459012 | orchestrator | 2026-04-17 05:28:19.459023 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-17 05:28:19.459034 | orchestrator | Friday 17 April 2026 05:28:12 +0000 (0:00:01.216) 0:01:11.108 ********** 2026-04-17 05:28:19.459045 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:28:19.459055 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:28:19.459066 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:28:19.459076 | orchestrator | 2026-04-17 05:28:19.459087 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-17 05:28:19.459098 | orchestrator | Friday 17 April 2026 05:28:14 +0000 (0:00:02.297) 0:01:13.406 ********** 2026-04-17 05:28:19.459109 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:28:19.459119 | orchestrator | 2026-04-17 05:28:19.459130 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-17 05:28:19.459141 | orchestrator | Friday 17 April 2026 05:28:15 +0000 (0:00:01.024) 0:01:14.431 ********** 2026-04-17 05:28:19.459175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:19.459197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 05:28:19.459210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:28:19.459232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:19.459244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2026-04-17 05:28:19.459266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:28:21.418129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:21.418218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 05:28:21.418251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:28:21.418260 | orchestrator | 2026-04-17 05:28:21.418269 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-17 05:28:21.418279 | orchestrator | Friday 17 April 2026 05:28:20 +0000 (0:00:04.636) 0:01:19.067 ********** 2026-04-17 05:28:21.418288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:28:21.418310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 05:28:21.418355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:28:21.418364 | orchestrator | skipping: [testbed-node-0] 
2026-04-17 05:28:21.418374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:28:21.418388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 05:28:21.418396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:28:21.418403 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:21.418417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:28:32.153889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 05:28:32.154148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:28:32.154206 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:32.154222 | orchestrator | 2026-04-17 05:28:32.154234 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-17 05:28:32.154246 | orchestrator | Friday 17 April 2026 05:28:21 +0000 (0:00:01.187) 0:01:20.255 ********** 2026-04-17 05:28:32.154258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:32.154273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-04-17 05:28:32.154285 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:32.154296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:32.154307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:32.154318 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:32.154329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:32.154341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:32.154351 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:32.154362 | orchestrator | 2026-04-17 05:28:32.154373 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-17 05:28:32.154385 | orchestrator | Friday 17 April 2026 05:28:22 +0000 (0:00:00.933) 0:01:21.188 ********** 2026-04-17 05:28:32.154398 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:28:32.154411 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:28:32.154423 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:28:32.154435 | orchestrator | 2026-04-17 05:28:32.154448 | orchestrator | TASK [proxysql-config : 
Copying over barbican ProxySQL rules config] *********** 2026-04-17 05:28:32.154461 | orchestrator | Friday 17 April 2026 05:28:23 +0000 (0:00:01.220) 0:01:22.409 ********** 2026-04-17 05:28:32.154473 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:28:32.154485 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:28:32.154497 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:28:32.154509 | orchestrator | 2026-04-17 05:28:32.154522 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-17 05:28:32.154534 | orchestrator | Friday 17 April 2026 05:28:26 +0000 (0:00:02.182) 0:01:24.592 ********** 2026-04-17 05:28:32.154547 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:32.154559 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:32.154571 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:32.154583 | orchestrator | 2026-04-17 05:28:32.154595 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-17 05:28:32.154633 | orchestrator | Friday 17 April 2026 05:28:26 +0000 (0:00:00.602) 0:01:25.194 ********** 2026-04-17 05:28:32.154647 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:28:32.154659 | orchestrator | 2026-04-17 05:28:32.154672 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-17 05:28:32.154684 | orchestrator | Friday 17 April 2026 05:28:27 +0000 (0:00:00.728) 0:01:25.923 ********** 2026-04-17 05:28:32.154709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-17 05:28:32.154725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-17 05:28:32.154738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 
rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-17 05:28:32.154751 | orchestrator | 2026-04-17 05:28:32.154762 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-17 05:28:32.154774 | orchestrator | Friday 17 April 2026 05:28:30 +0000 (0:00:03.234) 0:01:29.157 ********** 2026-04-17 05:28:32.154785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-17 05:28:32.154803 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:32.154829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-17 05:28:40.787681 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:40.787796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-17 05:28:40.787816 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:40.787830 | orchestrator | 2026-04-17 05:28:40.787842 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-17 05:28:40.787854 | orchestrator | Friday 17 April 2026 05:28:32 +0000 (0:00:01.559) 0:01:30.717 ********** 2026-04-17 05:28:40.787867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 
2000 rise 2 fall 5']}})  2026-04-17 05:28:40.787881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-17 05:28:40.787894 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:40.787905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-17 05:28:40.787916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-17 05:28:40.788047 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:40.788062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-17 
05:28:40.788074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-17 05:28:40.788085 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:40.788096 | orchestrator | 2026-04-17 05:28:40.788107 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-17 05:28:40.788118 | orchestrator | Friday 17 April 2026 05:28:34 +0000 (0:00:01.954) 0:01:32.672 ********** 2026-04-17 05:28:40.788129 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:40.788139 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:40.788151 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:40.788162 | orchestrator | 2026-04-17 05:28:40.788190 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-17 05:28:40.788220 | orchestrator | Friday 17 April 2026 05:28:35 +0000 (0:00:00.840) 0:01:33.513 ********** 2026-04-17 05:28:40.788233 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:40.788246 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:40.788258 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:40.788270 | orchestrator | 2026-04-17 05:28:40.788283 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-17 05:28:40.788296 | orchestrator | Friday 17 April 2026 05:28:36 +0000 (0:00:01.434) 0:01:34.948 ********** 2026-04-17 05:28:40.788308 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:28:40.788320 | orchestrator | 2026-04-17 
05:28:40.788333 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-17 05:28:40.788345 | orchestrator | Friday 17 April 2026 05:28:37 +0000 (0:00:00.875) 0:01:35.823 ********** 2026-04-17 05:28:40.788361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:40.788377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 
05:28:40.788400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:40.788414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 05:28:40.788441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:28:41.890933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 05:28:41.891088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 05:28:41.891131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 05:28:41.891148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:41.891176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:28:41.891209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 05:28:41.891222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 05:28:41.891245 | orchestrator | 2026-04-17 05:28:41.891258 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-17 05:28:41.891270 | orchestrator | Friday 17 April 2026 05:28:41 +0000 (0:00:04.115) 0:01:39.939 ********** 2026-04-17 05:28:41.891283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:28:41.891295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:28:41.891312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 05:28:41.891333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 05:28:42.925108 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:42.925205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:28:42.925252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:28:42.925262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 05:28:42.925281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 05:28:42.925288 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:42.925309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:28:42.925317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:28:42.925329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 05:28:42.925335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 05:28:42.925342 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:42.925348 | orchestrator | 2026-04-17 05:28:42.925355 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-17 05:28:42.925363 | orchestrator | Friday 17 April 2026 05:28:42 +0000 (0:00:00.770) 0:01:40.710 ********** 2026-04-17 05:28:42.925370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:42.925379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:42.925387 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:42.925397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:42.925403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 
05:28:42.925410 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:42.925416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:42.925431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:28:52.718276 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:52.718392 | orchestrator | 2026-04-17 05:28:52.718409 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-17 05:28:52.718422 | orchestrator | Friday 17 April 2026 05:28:43 +0000 (0:00:01.029) 0:01:41.740 ********** 2026-04-17 05:28:52.718433 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:28:52.718445 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:28:52.718456 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:28:52.718467 | orchestrator | 2026-04-17 05:28:52.718478 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-17 05:28:52.718489 | orchestrator | Friday 17 April 2026 05:28:44 +0000 (0:00:01.652) 0:01:43.393 ********** 2026-04-17 05:28:52.718500 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:28:52.718510 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:28:52.718521 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:28:52.718532 | orchestrator | 2026-04-17 05:28:52.718543 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-17 05:28:52.718553 | orchestrator | Friday 17 April 2026 05:28:47 +0000 (0:00:02.199) 0:01:45.592 ********** 2026-04-17 05:28:52.718564 | 
orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:52.718575 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:52.718587 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:52.718598 | orchestrator | 2026-04-17 05:28:52.718608 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-17 05:28:52.718619 | orchestrator | Friday 17 April 2026 05:28:47 +0000 (0:00:00.367) 0:01:45.960 ********** 2026-04-17 05:28:52.718630 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:52.718641 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:28:52.718652 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:28:52.718662 | orchestrator | 2026-04-17 05:28:52.718673 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-17 05:28:52.718684 | orchestrator | Friday 17 April 2026 05:28:47 +0000 (0:00:00.316) 0:01:46.277 ********** 2026-04-17 05:28:52.718695 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:28:52.718706 | orchestrator | 2026-04-17 05:28:52.718718 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-17 05:28:52.718729 | orchestrator | Friday 17 April 2026 05:28:48 +0000 (0:00:01.103) 0:01:47.380 ********** 2026-04-17 05:28:52.718746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:52.718778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 05:28:52.718815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 05:28:52.718849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 05:28:52.718864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 05:28:52.718878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:28:52.718891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 05:28:52.718910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:52.718932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 05:28:52.718953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.588431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.588506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.588515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:28:53.588537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.588543 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.588588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 05:28:53.588594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 
05:28:53.588599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.588604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.588616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 
05:28:53.588621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.588626 | orchestrator | 2026-04-17 05:28:53.588632 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-17 05:28:53.588638 | orchestrator | Friday 17 April 2026 05:28:53 +0000 (0:00:04.125) 0:01:51.505 ********** 2026-04-17 05:28:53.588647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:28:53.802221 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 05:28:53.802333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.802389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.802428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.802449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.802487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:28:53.802503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.802515 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:28:53.802529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 05:28:53.802549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.802565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.802577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.802588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:28:53.802606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 05:29:05.339295 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:29:05.339414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:29:05.339475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 05:29:05.339491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 05:29:05.339503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 05:29:05.339514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 05:29:05.339544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:29:05.339557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 05:29:05.339577 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:29:05.339589 | orchestrator | 2026-04-17 05:29:05.339601 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-17 05:29:05.339612 | orchestrator | Friday 17 April 2026 05:28:54 +0000 (0:00:01.319) 0:01:52.825 ********** 2026-04-17 05:29:05.339624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:05.339638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:05.339650 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:29:05.339666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:05.339678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:05.339689 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:29:05.339700 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:05.339711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:05.339722 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:29:05.339733 | orchestrator | 2026-04-17 05:29:05.339744 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-17 05:29:05.339755 | orchestrator | Friday 17 April 2026 05:28:55 +0000 (0:00:01.509) 0:01:54.335 ********** 2026-04-17 05:29:05.339765 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:29:05.339777 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:29:05.339787 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:29:05.339798 | orchestrator | 2026-04-17 05:29:05.339809 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-17 05:29:05.339820 | orchestrator | Friday 17 April 2026 05:28:57 +0000 (0:00:01.256) 0:01:55.591 ********** 2026-04-17 05:29:05.339833 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:29:05.339846 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:29:05.339858 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:29:05.339871 | orchestrator | 2026-04-17 05:29:05.339884 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-17 05:29:05.339896 | orchestrator | Friday 17 April 2026 05:28:59 +0000 (0:00:02.210) 0:01:57.802 ********** 2026-04-17 05:29:05.339909 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:29:05.339921 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
05:29:05.339934 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:29:05.339946 | orchestrator | 2026-04-17 05:29:05.339959 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-17 05:29:05.339978 | orchestrator | Friday 17 April 2026 05:28:59 +0000 (0:00:00.629) 0:01:58.432 ********** 2026-04-17 05:29:05.340028 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:29:05.340046 | orchestrator | 2026-04-17 05:29:05.340059 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-17 05:29:05.340072 | orchestrator | Friday 17 April 2026 05:29:00 +0000 (0:00:00.844) 0:01:59.276 ********** 2026-04-17 05:29:05.340099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 05:29:05.593939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 05:29:05.594093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 05:29:05.594126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 05:29:05.594132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 05:29:05.594153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 05:29:09.278709 | orchestrator | 2026-04-17 05:29:09.278810 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-17 05:29:09.278821 | orchestrator | Friday 17 April 2026 05:29:05 +0000 (0:00:04.894) 0:02:04.170 ********** 2026-04-17 05:29:09.278841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 05:29:09.278882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 05:29:09.278891 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:29:09.278912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 05:29:09.278928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 05:29:09.278936 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:29:09.278948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 05:29:21.749211 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 05:29:21.749339 | orchestrator | skipping: 
[testbed-node-2] 2026-04-17 05:29:21.749361 | orchestrator | 2026-04-17 05:29:21.749376 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-17 05:29:21.749392 | orchestrator | Friday 17 April 2026 05:29:09 +0000 (0:00:03.685) 0:02:07.856 ********** 2026-04-17 05:29:21.749408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 05:29:21.749424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 05:29:21.749465 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:29:21.749480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 05:29:21.749513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 05:29:21.749522 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:29:21.749531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 05:29:21.749540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 05:29:21.749548 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:29:21.749556 | orchestrator | 2026-04-17 
05:29:21.749565 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-17 05:29:21.749573 | orchestrator | Friday 17 April 2026 05:29:13 +0000 (0:00:03.977) 0:02:11.834 ********** 2026-04-17 05:29:21.749581 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:29:21.749589 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:29:21.749597 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:29:21.749605 | orchestrator | 2026-04-17 05:29:21.749613 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-17 05:29:21.749621 | orchestrator | Friday 17 April 2026 05:29:14 +0000 (0:00:01.543) 0:02:13.377 ********** 2026-04-17 05:29:21.749628 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:29:21.749643 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:29:21.749651 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:29:21.749661 | orchestrator | 2026-04-17 05:29:21.749671 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-17 05:29:21.749680 | orchestrator | Friday 17 April 2026 05:29:17 +0000 (0:00:02.197) 0:02:15.574 ********** 2026-04-17 05:29:21.749689 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:29:21.749705 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:29:21.749714 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:29:21.749724 | orchestrator | 2026-04-17 05:29:21.749732 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-17 05:29:21.749742 | orchestrator | Friday 17 April 2026 05:29:17 +0000 (0:00:00.350) 0:02:15.925 ********** 2026-04-17 05:29:21.749750 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:29:21.749759 | orchestrator | 2026-04-17 05:29:21.749768 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-17 
05:29:21.749777 | orchestrator | Friday 17 April 2026 05:29:18 +0000 (0:00:01.266) 0:02:17.192 ********** 2026-04-17 05:29:21.749788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:29:21.749804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:29:32.265485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:29:32.265623 | orchestrator | 2026-04-17 05:29:32.265643 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-17 05:29:32.265656 | orchestrator | Friday 17 April 2026 05:29:22 +0000 (0:00:03.428) 0:02:20.620 ********** 2026-04-17 05:29:32.265669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:29:32.265726 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:29:32.265755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:29:32.265768 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:29:32.265780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:29:32.265791 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:29:32.265802 | orchestrator | 2026-04-17 05:29:32.265813 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-17 05:29:32.265824 | orchestrator | Friday 17 April 2026 05:29:22 +0000 (0:00:00.429) 0:02:21.050 ********** 2026-04-17 05:29:32.265836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:32.265850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:32.265863 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:29:32.265899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:32.265911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:32.265923 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:29:32.265934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:32.265944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:29:32.265955 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:29:32.265975 | orchestrator | 2026-04-17 05:29:32.265986 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-17 05:29:32.265997 | orchestrator | Friday 17 April 2026 05:29:23 +0000 (0:00:00.951) 0:02:22.001 ********** 2026-04-17 05:29:32.266008 | orchestrator | 
ok: [testbed-node-0] 2026-04-17 05:29:32.266093 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:29:32.266107 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:29:32.266118 | orchestrator | 2026-04-17 05:29:32.266129 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-17 05:29:32.266139 | orchestrator | Friday 17 April 2026 05:29:24 +0000 (0:00:01.207) 0:02:23.208 ********** 2026-04-17 05:29:32.266150 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:29:32.266160 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:29:32.266171 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:29:32.266181 | orchestrator | 2026-04-17 05:29:32.266192 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-17 05:29:32.266203 | orchestrator | Friday 17 April 2026 05:29:26 +0000 (0:00:02.174) 0:02:25.383 ********** 2026-04-17 05:29:32.266214 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:29:32.266224 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:29:32.266235 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:29:32.266245 | orchestrator | 2026-04-17 05:29:32.266256 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-17 05:29:32.266267 | orchestrator | Friday 17 April 2026 05:29:27 +0000 (0:00:00.345) 0:02:25.728 ********** 2026-04-17 05:29:32.266277 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:29:32.266288 | orchestrator | 2026-04-17 05:29:32.266298 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-17 05:29:32.266309 | orchestrator | Friday 17 April 2026 05:29:28 +0000 (0:00:01.302) 0:02:27.031 ********** 2026-04-17 05:29:32.266367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 05:29:33.210490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-17 05:29:33.210619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-17 05:29:33.210661 | orchestrator |
2026-04-17 05:29:33.210676 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-04-17 05:29:33.210689 | orchestrator | Friday 17 April 2026 05:29:32 +0000 (0:00:04.131) 0:02:31.162 **********
2026-04-17 05:29:33.210709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-17 05:29:33.210733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-17 05:29:39.357915 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:29:39.358133 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:29:39.358176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-17 05:29:39.358191 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:29:39.358200 | orchestrator |
2026-04-17 05:29:39.358210 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-04-17 05:29:39.358220 | orchestrator | Friday 17 April 2026 05:29:33 +0000 (0:00:00.988) 0:02:32.151 **********
2026-04-17 05:29:39.358231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-04-17 05:29:39.358263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-17 05:29:39.358277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-04-17 05:29:39.358293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-04-17 05:29:39.358329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-17 05:29:39.358345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-17 05:29:39.358364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-17 05:29:39.358374 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:29:39.358383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-04-17 05:29:39.358398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-17 05:29:39.358412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-17 05:29:39.358426 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:29:39.358441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-04-17 05:29:39.358456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-17 05:29:39.358466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-04-17 05:29:39.358485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-17 05:29:39.358495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-17 05:29:39.358505 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:29:39.358515 | orchestrator |
2026-04-17 05:29:39.358525 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-04-17 05:29:39.358535 | orchestrator | Friday 17 April 2026 05:29:35 +0000 (0:00:01.478) 0:02:33.629 **********
2026-04-17 05:29:39.358545 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:29:39.358555 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:29:39.358565 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:29:39.358575 | orchestrator |
2026-04-17 05:29:39.358584 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-04-17 05:29:39.358594 | orchestrator | Friday 17 April 2026 05:29:36 +0000 (0:00:01.185) 0:02:34.815 **********
2026-04-17 05:29:39.358604 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:29:39.358614 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:29:39.358623 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:29:39.358633 | orchestrator |
2026-04-17 05:29:39.358643 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-04-17 05:29:39.358653 | orchestrator | Friday 17 April 2026 05:29:38 +0000 (0:00:02.298) 0:02:37.114 **********
2026-04-17 05:29:39.358663 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:29:39.358672 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:29:39.358682 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:29:39.358692 | orchestrator |
2026-04-17 05:29:39.358702 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-04-17 05:29:39.358719 | orchestrator | Friday 17 April 2026 05:29:39 +0000 (0:00:00.703) 0:02:37.817 **********
2026-04-17 05:29:45.275023 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:29:45.275242 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:29:45.275274 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:29:45.275295 | orchestrator |
2026-04-17 05:29:45.275315 | orchestrator | TASK [include_role : keystone] *************************************************
2026-04-17 05:29:45.275335 | orchestrator | Friday 17 April 2026 05:29:39 +0000 (0:00:00.399) 0:02:38.217 **********
2026-04-17 05:29:45.275354 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 05:29:45.275373 | orchestrator |
2026-04-17 05:29:45.275391 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-04-17 05:29:45.275429 | orchestrator | Friday 17 April 2026 05:29:40 +0000 (0:00:01.054) 0:02:39.272 **********
2026-04-17 05:29:45.275459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 05:29:45.275520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 05:29:45.275546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 05:29:45.275567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 05:29:45.275625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 05:29:45.275649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 05:29:45.275683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 05:29:45.275703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 05:29:45.275723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 05:29:45.275743 | orchestrator |
2026-04-17 05:29:45.275763 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-04-17 05:29:45.275783 | orchestrator | Friday 17 April 2026 05:29:44 +0000 (0:00:04.088) 0:02:43.360 **********
2026-04-17 05:29:45.275822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 05:29:46.874553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 05:29:46.874694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 05:29:46.874713 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:29:46.874730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 05:29:46.874743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 05:29:46.874755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 05:29:46.874766 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:29:46.874811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 05:29:46.874833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 05:29:46.874845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 05:29:46.874856 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:29:46.874867 | orchestrator |
2026-04-17 05:29:46.874879 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-04-17 05:29:46.874891 | orchestrator | Friday 17 April 2026 05:29:45 +0000 (0:00:00.697) 0:02:44.058 **********
2026-04-17 05:29:46.874903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-17 05:29:46.874917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-17 05:29:46.874931 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:29:46.874942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-17 05:29:46.874954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-17 05:29:46.874964 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:29:46.874976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-17 05:29:46.874999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-17 05:29:46.875010 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:29:46.875021 | orchestrator |
2026-04-17 05:29:46.875033 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-04-17 05:29:46.875050 | orchestrator | Friday 17 April 2026 05:29:46 +0000 (0:00:01.273) 0:02:45.332 **********
2026-04-17 05:29:56.683377 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:29:56.683522 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:29:56.683551 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:29:56.683571 | orchestrator |
2026-04-17 05:29:56.683590 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-04-17 05:29:56.683609 | orchestrator | Friday 17 April 2026 05:29:48 +0000 (0:00:01.296) 0:02:46.629 **********
2026-04-17 05:29:56.683628 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:29:56.683646 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:29:56.683665 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:29:56.683683 | orchestrator |
2026-04-17 05:29:56.683702 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-04-17 05:29:56.683721 | orchestrator | Friday 17 April 2026 05:29:50 +0000 (0:00:02.319) 0:02:48.948 **********
2026-04-17 05:29:56.683737 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:29:56.683749 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:29:56.683760 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:29:56.683770 | orchestrator |
2026-04-17 05:29:56.683781 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-04-17 05:29:56.683792 | orchestrator | Friday 17 April 2026 05:29:50 +0000 (0:00:00.401) 0:02:49.350 **********
2026-04-17 05:29:56.683803 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 05:29:56.683814 | orchestrator |
2026-04-17 05:29:56.683824 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-04-17 05:29:56.683835 | orchestrator | Friday 17 April 2026 05:29:52 +0000 (0:00:01.361) 0:02:50.711 **********
2026-04-17 05:29:56.683852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:29:56.683870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 05:29:56.683928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:29:56.683967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17
05:29:56.683982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:29:56.683995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 05:29:56.684008 | orchestrator | 2026-04-17 05:29:56.684020 | orchestrator | TASK [haproxy-config : Add configuration for magnum 
when using single external frontend] *** 2026-04-17 05:29:56.684034 | orchestrator | Friday 17 April 2026 05:29:56 +0000 (0:00:03.977) 0:02:54.689 ********** 2026-04-17 05:29:56.684047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:29:56.684138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}})  2026-04-17 05:30:07.406922 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:30:07.407077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:30:07.407136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 05:30:07.407161 | 
orchestrator | skipping: [testbed-node-1] 2026-04-17 05:30:07.407175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:30:07.407320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 05:30:07.407349 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
05:30:07.407371 | orchestrator | 2026-04-17 05:30:07.407394 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-17 05:30:07.407415 | orchestrator | Friday 17 April 2026 05:29:56 +0000 (0:00:00.704) 0:02:55.393 ********** 2026-04-17 05:30:07.407539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:07.407570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:07.407592 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:30:07.407612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:07.407631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:07.407646 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:30:07.407657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:07.407668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:07.407679 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:30:07.407690 | orchestrator | 2026-04-17 05:30:07.407701 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-17 05:30:07.407712 | orchestrator | Friday 17 April 2026 05:29:58 +0000 (0:00:01.902) 0:02:57.296 ********** 2026-04-17 05:30:07.407732 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:30:07.407751 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:30:07.407787 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:30:07.407808 | orchestrator | 2026-04-17 05:30:07.407827 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-17 05:30:07.407839 | orchestrator | Friday 17 April 2026 05:30:00 +0000 (0:00:01.257) 0:02:58.553 ********** 2026-04-17 05:30:07.407850 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:30:07.407861 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:30:07.407872 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:30:07.407885 | orchestrator | 2026-04-17 05:30:07.407904 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-17 05:30:07.407924 | orchestrator | Friday 17 April 2026 05:30:02 +0000 (0:00:02.266) 0:03:00.820 ********** 2026-04-17 05:30:07.407944 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:30:07.407963 | orchestrator | 2026-04-17 05:30:07.407983 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-17 05:30:07.408001 | orchestrator | Friday 17 April 2026 05:30:03 +0000 (0:00:01.500) 0:03:02.321 ********** 2026-04-17 05:30:07.408024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:30:07.408079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:30:07.408157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 05:30:08.346276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 05:30:08.346386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:30:08.346402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': 
{'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:30:08.346414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 05:30:08.346438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 05:30:08.346468 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:30:08.346492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:30:08.346511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 05:30:08.346522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 05:30:08.346535 | orchestrator | 2026-04-17 05:30:08.346547 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-17 05:30:08.346559 | orchestrator | Friday 17 April 2026 05:30:07 +0000 (0:00:04.064) 0:03:06.385 ********** 2026-04-17 05:30:08.346576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:30:08.346595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:30:09.507606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 05:30:09.507670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 05:30:09.507676 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:30:09.507682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:30:09.507686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:30:09.507691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 05:30:09.507745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 05:30:09.507756 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:30:09.507760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:30:09.507764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 05:30:09.507768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-17 05:30:09.507774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-17 05:30:09.507778 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:09.507782 | orchestrator |
2026-04-17 05:30:09.507786 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-04-17 05:30:09.507791 | orchestrator | Friday 17 April 2026 05:30:08 +0000 (0:00:00.752) 0:03:07.137 **********
2026-04-17 05:30:09.507795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-17 05:30:09.507800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-17 05:30:09.507808 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:30:09.507812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-17 05:30:09.507819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-17 05:30:21.539713 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:30:21.539835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-17 05:30:21.539856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-04-17 05:30:21.539871 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:21.539883 | orchestrator |
2026-04-17 05:30:21.539895 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-04-17 05:30:21.539908 | orchestrator | Friday 17 April 2026 05:30:10 +0000 (0:00:01.387) 0:03:08.525 **********
2026-04-17 05:30:21.539919 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:30:21.539931 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:30:21.539942 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:30:21.539952 | orchestrator |
2026-04-17 05:30:21.539964 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-04-17 05:30:21.539975 | orchestrator | Friday 17 April 2026 05:30:11 +0000 (0:00:01.239) 0:03:09.764 **********
2026-04-17 05:30:21.539985 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:30:21.539996 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:30:21.540080 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:30:21.540092 | orchestrator |
2026-04-17 05:30:21.540104 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-04-17 05:30:21.540115 | orchestrator | Friday 17 April 2026 05:30:13 +0000 (0:00:02.340) 0:03:12.104 **********
2026-04-17 05:30:21.540126 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 05:30:21.540137 | orchestrator |
2026-04-17 05:30:21.540148 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-04-17 05:30:21.540159 | orchestrator | Friday 17 April 2026 05:30:15 +0000 (0:00:01.874) 0:03:13.978 **********
2026-04-17 05:30:21.540170 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:30:21.540181 | orchestrator |
2026-04-17 05:30:21.540192 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-04-17 05:30:21.540203 | orchestrator | Friday 17 April 2026 05:30:18 +0000 (0:00:03.356) 0:03:17.335 **********
2026-04-17 05:30:21.540235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 05:30:21.540335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 05:30:21.540351 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:30:21.540364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 05:30:21.540379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 05:30:21.540400 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:30:21.540429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 05:30:24.649403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 05:30:24.649507 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:24.649523 | orchestrator |
2026-04-17 05:30:24.649536 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-04-17 05:30:24.649548 | orchestrator | Friday 17 April 2026 05:30:21 +0000 (0:00:02.781) 0:03:20.117 **********
2026-04-17 05:30:24.649580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 05:30:24.649616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 05:30:24.649629 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:30:24.649661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 05:30:24.649675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 05:30:24.649687 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:30:24.649710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-17 05:30:24.649730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-17 05:30:35.873933 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:35.874172 | orchestrator |
2026-04-17 05:30:35.874203 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-04-17 05:30:35.874216 | orchestrator | Friday 17 April 2026 05:30:25 +0000 (0:00:03.404) 0:03:23.521 **********
2026-04-17 05:30:35.874230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 05:30:35.874248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 05:30:35.874288 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:30:35.874301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 05:30:35.874328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 05:30:35.874339 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:30:35.874351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 05:30:35.874366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-17 05:30:35.874379 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:35.874392 | orchestrator |
2026-04-17 05:30:35.874404 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-04-17 05:30:35.874417 | orchestrator | Friday 17 April 2026 05:30:28 +0000 (0:00:02.121) 0:03:26.485 **********
2026-04-17 05:30:35.874429 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:30:35.874463 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:30:35.874476 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:30:35.874494 | orchestrator |
2026-04-17 05:30:35.874512 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-04-17 05:30:35.874531 | orchestrator | Friday 17 April 2026 05:30:30 +0000 (0:00:02.121) 0:03:28.607 **********
2026-04-17 05:30:35.874550 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:30:35.874569 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:30:35.874587 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:35.874604 | orchestrator |
2026-04-17 05:30:35.874622 | orchestrator | TASK [include_role : masakari] *************************************************
2026-04-17 05:30:35.874639 | orchestrator | Friday 17 April 2026 05:30:32 +0000 (0:00:01.899) 0:03:30.507 **********
2026-04-17 05:30:35.874658 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:30:35.874677 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:30:35.874696 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:35.874715 | orchestrator |
2026-04-17 05:30:35.874747 | orchestrator | TASK [include_role : memcached] ************************************************
2026-04-17 05:30:35.874767 | orchestrator | Friday 17 April 2026 05:30:32 +0000 (0:00:00.639) 0:03:31.146 **********
2026-04-17 05:30:35.874815 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 05:30:35.874835 | orchestrator |
2026-04-17 05:30:35.874854 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-04-17 05:30:35.874872 | orchestrator | Friday 17 April 2026 05:30:33 +0000 (0:00:01.196) 0:03:32.342 **********
2026-04-17 05:30:35.874892 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-17 05:30:35.874985 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-17 05:30:35.875011 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-17 05:30:35.875031 | orchestrator |
2026-04-17 05:30:35.875050 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-04-17 05:30:35.875070 | orchestrator | Friday 17 April 2026 05:30:35 +0000 (0:00:01.864) 0:03:34.207 **********
2026-04-17 05:30:35.875105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-17 05:30:45.670197 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:30:45.670326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-17 05:30:45.670341 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:30:45.670350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-17 05:30:45.670358 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:45.670367 | orchestrator |
2026-04-17 05:30:45.670375 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-04-17 05:30:45.670385 | orchestrator | Friday 17 April 2026 05:30:36 +0000 (0:00:00.441) 0:03:34.649 **********
2026-04-17 05:30:45.670395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-17 05:30:45.670405 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:30:45.670425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-17 05:30:45.670433 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:30:45.670441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-17 05:30:45.670449 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:45.670457 | orchestrator |
2026-04-17 05:30:45.670465 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-04-17 05:30:45.670473 | orchestrator | Friday 17 April 2026 05:30:36 +0000 (0:00:00.667) 0:03:35.316 **********
2026-04-17 05:30:45.670481 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:30:45.670489 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:30:45.670497 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:45.670505 | orchestrator |
2026-04-17 05:30:45.670513 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-04-17 05:30:45.670521 | orchestrator | Friday 17 April 2026 05:30:37 +0000 (0:00:00.883) 0:03:36.199 **********
2026-04-17 05:30:45.670529 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:30:45.670536 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:30:45.670544 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:45.670552 | orchestrator |
2026-04-17 05:30:45.670610 | orchestrator | TASK [include_role : mistral] **************************************************
2026-04-17 05:30:45.670660 | orchestrator | Friday 17 April 2026 05:30:39 +0000 (0:00:01.589) 0:03:37.789 **********
2026-04-17 05:30:45.670670 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:30:45.670678 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:30:45.670686 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:30:45.670693 | orchestrator |
2026-04-17 05:30:45.670701 | orchestrator | TASK [include_role : neutron] **************************************************
2026-04-17 05:30:45.670709 | orchestrator | Friday 17 April 2026 05:30:39 +0000 (0:00:00.363) 0:03:38.152 **********
2026-04-17 05:30:45.670717
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:30:45.670725 | orchestrator | 2026-04-17 05:30:45.670733 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-17 05:30:45.670741 | orchestrator | Friday 17 April 2026 05:30:41 +0000 (0:00:01.698) 0:03:39.851 ********** 2026-04-17 05:30:45.670767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:30:45.670781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:45.670797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-17 05:30:45.670808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-17 05:30:45.670832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:30:45.789160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:45.789270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:45.789287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:45.789302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-17 05:30:45.789339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:45.789369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-17 05:30:45.789382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 05:30:45.789400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:45.789413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:45.789432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:45.789443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:45.789510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 
'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-17 05:30:45.909220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:45.909312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:45.909344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 05:30:45.909378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:45.909392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:45.909423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 05:30:45.909438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:45.909455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:45.909490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-17 05:30:45.909503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:45.909514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:45.909535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 05:30:46.090397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:30:46.090581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:46.090612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:46.090711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 
'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-17 05:30:46.090762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-17 05:30:46.090796 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:46.090834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:46.090856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:46.090877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 05:30:46.090899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:46.090932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:47.311193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-17 05:30:47.311323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:47.311339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2026-04-17 05:30:47.311353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 05:30:47.311366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:47.311377 | orchestrator | 2026-04-17 05:30:47.311388 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-17 05:30:47.311399 | orchestrator | Friday 17 April 
2026 05:30:46 +0000 (0:00:04.910) 0:03:44.761 ********** 2026-04-17 05:30:47.311427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:30:47.311448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:47.311460 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-17 05:30:47.311471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-17 05:30:47.311552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:47.735739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:47.735882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:47.735901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 05:30:47.735914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:47.735927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:47.735939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-17 05:30:47.735972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:47.735997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:47.736012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 05:30:47.736025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:47.736038 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:30:47.736052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:30:47.736071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:47.897417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-17 05:30:47.897518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-17 05:30:47.897535 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:47.897549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:47.897562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:47.897678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 05:30:47.897701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:47.897713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:47.897726 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-17 05:30:47.897738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:47.897749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:47.897781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 05:30:48.102144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:48.102219 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:30:48.102228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:30:48.102236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:48.102260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-17 05:30:48.102288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-17 05:30:48.102295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:48.102301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:48.102308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:48.102314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 05:30:48.102323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:48.102342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:59.122571 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-17 05:30:59.122688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-17 05:30:59.122706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 05:30:59.122722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 05:30:59.122804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 05:30:59.122830 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:30:59.122853 | orchestrator | 2026-04-17 05:30:59.122871 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-17 05:30:59.122899 | orchestrator | Friday 17 April 2026 05:30:48 +0000 (0:00:02.000) 0:03:46.762 ********** 2026-04-17 05:30:59.122912 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:59.122945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:59.122959 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:30:59.122970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:59.122982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:59.122993 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:30:59.123004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:59.123014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:30:59.123025 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:30:59.123036 | orchestrator | 2026-04-17 05:30:59.123047 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users 
config] ************ 2026-04-17 05:30:59.123058 | orchestrator | Friday 17 April 2026 05:30:50 +0000 (0:00:02.228) 0:03:48.990 ********** 2026-04-17 05:30:59.123078 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:30:59.123093 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:30:59.123105 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:30:59.123117 | orchestrator | 2026-04-17 05:30:59.123130 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-17 05:30:59.123142 | orchestrator | Friday 17 April 2026 05:30:51 +0000 (0:00:01.218) 0:03:50.209 ********** 2026-04-17 05:30:59.123155 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:30:59.123166 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:30:59.123178 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:30:59.123190 | orchestrator | 2026-04-17 05:30:59.123203 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-17 05:30:59.123213 | orchestrator | Friday 17 April 2026 05:30:53 +0000 (0:00:02.226) 0:03:52.436 ********** 2026-04-17 05:30:59.123224 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:30:59.123235 | orchestrator | 2026-04-17 05:30:59.123245 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-17 05:30:59.123256 | orchestrator | Friday 17 April 2026 05:30:55 +0000 (0:00:01.599) 0:03:54.035 ********** 2026-04-17 05:30:59.123268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 05:30:59.123295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 05:31:10.439411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 05:31:10.439547 | orchestrator | 2026-04-17 05:31:10.439564 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-17 05:31:10.439576 | orchestrator | Friday 17 April 2026 05:30:59 +0000 (0:00:03.819) 0:03:57.855 ********** 2026-04-17 05:31:10.439588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 05:31:10.439599 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:31:10.439610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 05:31:10.439621 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:31:10.439663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 05:31:10.439675 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:31:10.439693 | orchestrator | 2026-04-17 05:31:10.439703 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-17 05:31:10.439713 | orchestrator | Friday 17 April 2026 05:30:59 +0000 (0:00:00.582) 0:03:58.438 ********** 2026-04-17 05:31:10.439724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:31:10.439737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:31:10.439748 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:31:10.439758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:31:10.439768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:31:10.439777 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:31:10.439787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:31:10.439797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:31:10.439807 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:31:10.439816 | orchestrator | 2026-04-17 05:31:10.439826 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-17 05:31:10.439836 | orchestrator | Friday 17 April 2026 05:31:01 +0000 (0:00:01.498) 0:03:59.936 ********** 2026-04-17 05:31:10.439845 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:31:10.439856 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:31:10.439865 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:31:10.439874 | orchestrator | 2026-04-17 05:31:10.439884 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-17 05:31:10.439893 | orchestrator | Friday 17 April 2026 05:31:02 +0000 (0:00:01.230) 0:04:01.167 ********** 2026-04-17 05:31:10.439904 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:31:10.439915 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:31:10.439925 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:31:10.439936 | orchestrator | 2026-04-17 05:31:10.439947 | orchestrator | TASK 
[include_role : nova] ***************************************************** 2026-04-17 05:31:10.439958 | orchestrator | Friday 17 April 2026 05:31:04 +0000 (0:00:02.230) 0:04:03.397 ********** 2026-04-17 05:31:10.439969 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:31:10.439981 | orchestrator | 2026-04-17 05:31:10.439991 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-17 05:31:10.440003 | orchestrator | Friday 17 April 2026 05:31:06 +0000 (0:00:01.711) 0:04:05.109 ********** 2026-04-17 05:31:10.440028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:31:12.533961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:31:12.534128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 
05:31:12.534148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:31:12.534277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:31:12.534299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:31:12.534312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:31:12.534325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:31:12.534337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 05:31:12.534363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 05:31:12.534385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:31:13.687376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 05:31:13.687479 | orchestrator | 2026-04-17 05:31:13.687497 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-17 05:31:13.687511 | orchestrator | Friday 17 April 2026 05:31:12 +0000 (0:00:05.993) 0:04:11.102 ********** 2026-04-17 05:31:13.687526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:31:13.687541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:31:13.687594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:31:13.687627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 05:31:13.687640 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:31:13.687653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:31:13.687666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:31:13.687684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:31:13.687714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:31:27.101760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:31:27.101874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 05:31:27.101892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 05:31:27.101930 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:31:27.101945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 05:31:27.101956 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:31:27.101967 | orchestrator | 2026-04-17 05:31:27.101991 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-17 05:31:27.102005 | orchestrator | Friday 17 April 2026 05:31:14 +0000 (0:00:01.455) 0:04:12.558 ********** 2026-04-17 05:31:27.102089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:31:27.102103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:31:27.102114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:31:27.102122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-04-17 05:31:27.102129 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:31:27.102149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:31:27.102156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:31:27.102162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:31:27.102168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:31:27.102175 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:31:27.102181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:31:27.102189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:31:27.102210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:31:27.102221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:31:27.102231 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:31:27.102240 | orchestrator | 2026-04-17 05:31:27.102251 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-17 05:31:27.102261 | orchestrator | Friday 17 April 2026 05:31:15 +0000 (0:00:01.306) 0:04:13.865 ********** 2026-04-17 05:31:27.102271 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:31:27.102282 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:31:27.102293 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:31:27.102302 | orchestrator | 2026-04-17 05:31:27.102313 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-17 05:31:27.102324 | orchestrator | Friday 17 April 2026 05:31:16 +0000 (0:00:01.180) 0:04:15.046 ********** 2026-04-17 05:31:27.102335 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:31:27.102346 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:31:27.102357 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:31:27.102368 | orchestrator | 2026-04-17 05:31:27.102379 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-17 05:31:27.102389 | orchestrator | Friday 17 April 2026 05:31:19 +0000 (0:00:02.491) 0:04:17.538 ********** 2026-04-17 05:31:27.102405 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:31:27.102416 | orchestrator | 2026-04-17 
05:31:27.102427 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-17 05:31:27.102437 | orchestrator | Friday 17 April 2026 05:31:21 +0000 (0:00:02.172) 0:04:19.710 ********** 2026-04-17 05:31:27.102448 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-17 05:31:27.102461 | orchestrator | 2026-04-17 05:31:27.102470 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-17 05:31:27.102480 | orchestrator | Friday 17 April 2026 05:31:22 +0000 (0:00:01.710) 0:04:21.421 ********** 2026-04-17 05:31:27.102492 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-17 05:31:27.102517 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-17 05:31:42.274302 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-17 05:31:42.274462 | orchestrator | 2026-04-17 05:31:42.274482 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-17 05:31:42.274495 | orchestrator | Friday 17 April 2026 05:31:27 +0000 (0:00:04.434) 0:04:25.856 ********** 2026-04-17 05:31:42.274510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 05:31:42.274522 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:31:42.274535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 05:31:42.274546 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:31:42.274557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 05:31:42.274568 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:31:42.274580 | orchestrator | 2026-04-17 05:31:42.274592 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-17 05:31:42.274603 | orchestrator | Friday 17 April 2026 05:31:29 +0000 (0:00:01.668) 0:04:27.524 ********** 2026-04-17 05:31:42.274629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-17 05:31:42.274644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-17 05:31:42.274657 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:31:42.274668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-17 05:31:42.274679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-17 05:31:42.274690 | 
orchestrator | skipping: [testbed-node-1] 2026-04-17 05:31:42.274701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-17 05:31:42.274712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-17 05:31:42.274752 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:31:42.274764 | orchestrator | 2026-04-17 05:31:42.274776 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-17 05:31:42.274786 | orchestrator | Friday 17 April 2026 05:31:31 +0000 (0:00:02.831) 0:04:30.356 ********** 2026-04-17 05:31:42.274824 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:31:42.274837 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:31:42.274847 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:31:42.274858 | orchestrator | 2026-04-17 05:31:42.274869 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-17 05:31:42.274880 | orchestrator | Friday 17 April 2026 05:31:34 +0000 (0:00:02.581) 0:04:32.937 ********** 2026-04-17 05:31:42.274891 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:31:42.274902 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:31:42.274912 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:31:42.274923 | orchestrator | 2026-04-17 05:31:42.274934 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-17 05:31:42.274945 | orchestrator | Friday 17 April 2026 05:31:38 +0000 (0:00:03.649) 0:04:36.587 ********** 2026-04-17 05:31:42.274957 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-17 05:31:42.274969 | orchestrator | 2026-04-17 05:31:42.274980 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-17 05:31:42.274991 | orchestrator | Friday 17 April 2026 05:31:39 +0000 (0:00:01.023) 0:04:37.611 ********** 2026-04-17 05:31:42.275003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 05:31:42.275015 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:31:42.275026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 05:31:42.275037 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:31:42.275054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 05:31:42.275065 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:31:42.275076 | orchestrator | 2026-04-17 05:31:42.275087 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-17 05:31:42.275098 | orchestrator | Friday 17 April 2026 05:31:40 +0000 (0:00:01.708) 0:04:39.319 ********** 2026-04-17 05:31:42.275109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 05:31:42.275128 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:31:42.275139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 05:31:42.275150 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:31:42.275169 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 05:32:09.066964 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:09.067064 | orchestrator | 2026-04-17 05:32:09.067075 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-17 05:32:09.067085 | orchestrator | Friday 17 April 2026 05:31:42 +0000 (0:00:01.514) 0:04:40.833 ********** 2026-04-17 05:32:09.067093 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:09.067101 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:09.067111 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:09.067119 | orchestrator | 2026-04-17 05:32:09.067127 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-17 05:32:09.067134 | orchestrator | Friday 17 April 2026 05:31:44 +0000 (0:00:01.821) 0:04:42.654 ********** 2026-04-17 05:32:09.067141 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:32:09.067150 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:32:09.067157 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:32:09.067165 | orchestrator | 2026-04-17 05:32:09.067172 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-17 05:32:09.067179 | orchestrator | Friday 17 April 2026 05:31:47 +0000 (0:00:03.135) 0:04:45.790 ********** 2026-04-17 05:32:09.067186 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:32:09.067193 | orchestrator | ok: [testbed-node-1] 2026-04-17 
05:32:09.067201 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:32:09.067208 | orchestrator | 2026-04-17 05:32:09.067215 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-17 05:32:09.067222 | orchestrator | Friday 17 April 2026 05:31:50 +0000 (0:00:03.317) 0:04:49.108 ********** 2026-04-17 05:32:09.067229 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-17 05:32:09.067238 | orchestrator | 2026-04-17 05:32:09.067245 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-17 05:32:09.067252 | orchestrator | Friday 17 April 2026 05:31:51 +0000 (0:00:00.970) 0:04:50.078 ********** 2026-04-17 05:32:09.067261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 05:32:09.067289 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:09.067297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}}}})  2026-04-17 05:32:09.067305 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:09.067313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 05:32:09.067320 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:09.067328 | orchestrator | 2026-04-17 05:32:09.067335 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-17 05:32:09.067342 | orchestrator | Friday 17 April 2026 05:31:53 +0000 (0:00:01.553) 0:04:51.631 ********** 2026-04-17 05:32:09.067350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 05:32:09.067357 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:09.067493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 05:32:09.067511 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:09.067520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 05:32:09.067527 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:09.067534 | orchestrator | 2026-04-17 05:32:09.067542 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-17 05:32:09.067549 | orchestrator | Friday 17 April 2026 05:31:54 +0000 (0:00:01.617) 0:04:53.249 ********** 2026-04-17 05:32:09.067556 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:09.067563 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:09.067571 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:09.067585 | orchestrator | 2026-04-17 05:32:09.067592 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-17 05:32:09.067599 | orchestrator | Friday 17 April 2026 05:31:56 +0000 (0:00:01.738) 0:04:54.987 ********** 2026-04-17 05:32:09.067606 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:32:09.067614 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:32:09.067621 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:32:09.067628 | orchestrator 
| 2026-04-17 05:32:09.067635 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-17 05:32:09.067642 | orchestrator | Friday 17 April 2026 05:31:59 +0000 (0:00:02.726) 0:04:57.714 ********** 2026-04-17 05:32:09.067649 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:32:09.067656 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:32:09.067664 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:32:09.067671 | orchestrator | 2026-04-17 05:32:09.067678 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-17 05:32:09.067685 | orchestrator | Friday 17 April 2026 05:32:03 +0000 (0:00:04.396) 0:05:02.111 ********** 2026-04-17 05:32:09.067692 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:32:09.067699 | orchestrator | 2026-04-17 05:32:09.067706 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-17 05:32:09.067713 | orchestrator | Friday 17 April 2026 05:32:05 +0000 (0:00:01.439) 0:05:03.550 ********** 2026-04-17 05:32:09.067726 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 05:32:09.067735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 05:32:09.067750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 05:32:09.331876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 05:32:09.331988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:32:09.332015 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 05:32:09.332028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 05:32:09.332039 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 05:32:09.332066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 05:32:09.332084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 05:32:09.332094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 05:32:09.332109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 05:32:09.332120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 05:32:09.332130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:32:09.332140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:32:09.332157 | orchestrator | 2026-04-17 05:32:09.332175 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-17 05:32:10.367347 | orchestrator | Friday 17 April 2026 05:32:09 +0000 (0:00:04.242) 0:05:07.792 ********** 2026-04-17 05:32:10.367494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 05:32:10.367520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 05:32:10.367550 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 05:32:10.367564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 05:32:10.367577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 
05:32:10.367589 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:10.367623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 05:32:10.367659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 05:32:10.367671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 05:32:10.367688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 05:32:10.367700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:32:10.367712 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:10.367723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 05:32:10.367752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 05:32:23.245523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 05:32:23.245647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 05:32:23.245684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 05:32:23.245698 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:23.245713 | orchestrator | 2026-04-17 05:32:23.245725 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-17 05:32:23.245738 | orchestrator | Friday 17 April 2026 05:32:10 +0000 (0:00:01.185) 0:05:08.978 ********** 2026-04-17 05:32:23.245750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}})  2026-04-17 05:32:23.245764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 05:32:23.245777 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:23.245788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 05:32:23.245818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 05:32:23.245830 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:23.245841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 05:32:23.245852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 05:32:23.245863 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:23.245873 | orchestrator | 2026-04-17 05:32:23.245884 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-17 05:32:23.245895 | orchestrator | Friday 17 April 2026 05:32:11 +0000 (0:00:01.047) 0:05:10.026 ********** 2026-04-17 05:32:23.245906 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:32:23.245917 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:32:23.245928 | orchestrator | ok: 
[testbed-node-2] 2026-04-17 05:32:23.245939 | orchestrator | 2026-04-17 05:32:23.245949 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-17 05:32:23.245960 | orchestrator | Friday 17 April 2026 05:32:13 +0000 (0:00:01.768) 0:05:11.795 ********** 2026-04-17 05:32:23.245971 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:32:23.245982 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:32:23.246009 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:32:23.246085 | orchestrator | 2026-04-17 05:32:23.246098 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-17 05:32:23.246110 | orchestrator | Friday 17 April 2026 05:32:15 +0000 (0:00:02.315) 0:05:14.110 ********** 2026-04-17 05:32:23.246122 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:32:23.246136 | orchestrator | 2026-04-17 05:32:23.246148 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-17 05:32:23.246161 | orchestrator | Friday 17 April 2026 05:32:17 +0000 (0:00:01.511) 0:05:15.621 ********** 2026-04-17 05:32:23.246178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:32:23.246237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:32:23.246285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 
05:32:23.246311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-17 05:32:24.429829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-17 05:32:24.429956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-17 05:32:24.429997 | orchestrator | 2026-04-17 05:32:24.430011 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-17 05:32:24.430089 | orchestrator | Friday 17 April 2026 05:32:23 +0000 (0:00:06.672) 0:05:22.293 ********** 2026-04-17 
05:32:24.430103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:32:24.430137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-17 05:32:24.430150 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:24.430170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:32:24.430191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-17 05:32:24.430203 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:24.430215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:32:24.430236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-17 05:32:32.680021 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:32.680131 | orchestrator | 2026-04-17 05:32:32.680190 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-17 05:32:32.680204 | orchestrator | Friday 17 April 2026 05:32:24 +0000 (0:00:00.708) 0:05:23.002 ********** 2026-04-17 05:32:32.680218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:32:32.680250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-17 05:32:32.680284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-17 05:32:32.680298 | orchestrator | skipping: 
[testbed-node-0] 2026-04-17 05:32:32.680309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:32:32.680320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-17 05:32:32.680331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-17 05:32:32.680342 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:32.680353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:32:32.680364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-17 05:32:32.680374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  
2026-04-17 05:32:32.680385 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:32.680396 | orchestrator | 2026-04-17 05:32:32.680407 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-17 05:32:32.680418 | orchestrator | Friday 17 April 2026 05:32:25 +0000 (0:00:00.983) 0:05:23.986 ********** 2026-04-17 05:32:32.680429 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:32.680439 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:32.680450 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:32.680460 | orchestrator | 2026-04-17 05:32:32.680471 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-17 05:32:32.680481 | orchestrator | Friday 17 April 2026 05:32:26 +0000 (0:00:00.985) 0:05:24.971 ********** 2026-04-17 05:32:32.680493 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:32.680504 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:32.680514 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:32.680525 | orchestrator | 2026-04-17 05:32:32.680535 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-17 05:32:32.680547 | orchestrator | Friday 17 April 2026 05:32:28 +0000 (0:00:01.656) 0:05:26.627 ********** 2026-04-17 05:32:32.680559 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:32:32.680572 | orchestrator | 2026-04-17 05:32:32.680609 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-17 05:32:32.680630 | orchestrator | Friday 17 April 2026 05:32:29 +0000 (0:00:01.498) 0:05:28.126 ********** 2026-04-17 05:32:32.680672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-17 05:32:32.680690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 05:32:32.680705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:32.680719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:32.680733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 05:32:32.680756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-17 05:32:34.615987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 05:32:34.616194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:34.616224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:34.616242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 05:32:34.616261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-17 05:32:34.616282 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 05:32:34.616356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:34.616382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:34.616394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 05:32:34.616406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:32:34.616419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-17 05:32:34.616440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:34.616461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:35.996911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 
05:32:35.997016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:32:35.997035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': 
'9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-17 05:32:35.997050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:35.997084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:35.997203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 05:32:35.997228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:32:35.997240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-17 05:32:35.997252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:35.997272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:35.997284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 05:32:35.997296 | orchestrator | 2026-04-17 05:32:35.997308 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-17 05:32:35.997321 | orchestrator | Friday 17 April 2026 05:32:35 +0000 (0:00:05.866) 0:05:33.992 ********** 2026-04-17 05:32:35.997347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-17 05:32:36.185843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 05:32:36.185990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:36.186009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:36.186155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 05:32:36.186181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:32:36.186248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-17 05:32:36.186274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:36.186290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:36.186302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-17 05:32:36.186325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 05:32:36.186342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 05:32:36.186354 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:36.186368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:36.186390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:36.368899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 05:32:36.369030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:32:36.369128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-17 05:32:36.369168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:36.369200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-17 05:32:36.369213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:36.369233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 05:32:36.369245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 05:32:36.369257 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:36.369271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:36.369283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:36.369300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 05:32:36.369321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:32:44.474360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-17 05:32:44.474480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:44.474499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:32:44.474514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 05:32:44.474526 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:44.474539 | orchestrator | 2026-04-17 
05:32:44.474567 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-17 05:32:44.474580 | orchestrator | Friday 17 April 2026 05:32:36 +0000 (0:00:00.997) 0:05:34.990 ********** 2026-04-17 05:32:44.474593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-17 05:32:44.474607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-17 05:32:44.474622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:32:44.474673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:32:44.474688 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:44.474700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-17 05:32:44.474712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-17 05:32:44.474723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:32:44.474735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:32:44.474746 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:44.474756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-17 05:32:44.474767 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-17 05:32:44.474779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:32:44.474790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-17 05:32:44.474801 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:44.474812 | orchestrator | 2026-04-17 05:32:44.474823 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-17 05:32:44.474834 | orchestrator | Friday 17 April 2026 05:32:38 +0000 (0:00:01.491) 0:05:36.481 ********** 2026-04-17 05:32:44.474845 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:44.474865 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:44.474875 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:44.474886 | orchestrator | 2026-04-17 05:32:44.474898 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-17 05:32:44.474910 | orchestrator | Friday 17 April 2026 05:32:38 +0000 (0:00:00.673) 0:05:37.154 
********** 2026-04-17 05:32:44.474923 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:44.474935 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:44.474947 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:44.474959 | orchestrator | 2026-04-17 05:32:44.474971 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-17 05:32:44.474983 | orchestrator | Friday 17 April 2026 05:32:40 +0000 (0:00:01.755) 0:05:38.909 ********** 2026-04-17 05:32:44.475028 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:32:44.475048 | orchestrator | 2026-04-17 05:32:44.475067 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-17 05:32:44.475085 | orchestrator | Friday 17 April 2026 05:32:42 +0000 (0:00:02.063) 0:05:40.973 ********** 2026-04-17 05:32:44.475111 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:32:53.991358 | orchestrator | ok: [testbed-node-1] 
=> (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:32:53.991478 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 
05:32:53.991596 | orchestrator | 2026-04-17 05:32:53.991623 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-17 05:32:53.991643 | orchestrator | Friday 17 April 2026 05:32:44 +0000 (0:00:02.374) 0:05:43.347 ********** 2026-04-17 05:32:53.991664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:32:53.991685 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:53.991734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:32:53.991750 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:53.991762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:32:53.991773 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:53.991784 | orchestrator | 2026-04-17 05:32:53.991796 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-17 05:32:53.991807 | orchestrator | Friday 17 April 2026 05:32:45 +0000 (0:00:00.428) 0:05:43.775 ********** 2026-04-17 05:32:53.991818 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-17 05:32:53.991907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-17 05:32:53.991921 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:53.991932 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:53.991943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-17 05:32:53.991954 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:53.991964 | orchestrator | 2026-04-17 05:32:53.991975 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-17 05:32:53.991986 | orchestrator | Friday 17 April 2026 05:32:46 +0000 (0:00:00.957) 0:05:44.733 ********** 2026-04-17 05:32:53.991997 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:53.992007 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:53.992018 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:53.992029 | orchestrator | 2026-04-17 05:32:53.992039 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-17 05:32:53.992050 | orchestrator | Friday 17 April 2026 05:32:46 +0000 (0:00:00.473) 0:05:45.207 ********** 2026-04-17 05:32:53.992061 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:53.992071 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:32:53.992082 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:32:53.992093 | orchestrator | 2026-04-17 05:32:53.992103 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-17 05:32:53.992115 | orchestrator | Friday 17 April 2026 05:32:48 +0000 
(0:00:01.269) 0:05:46.476 ********** 2026-04-17 05:32:53.992125 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:32:53.992136 | orchestrator | 2026-04-17 05:32:53.992147 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-17 05:32:53.992157 | orchestrator | Friday 17 April 2026 05:32:49 +0000 (0:00:01.891) 0:05:48.368 ********** 2026-04-17 05:32:53.992169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 05:32:53.992194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 05:32:58.086286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 05:32:58.086405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 
'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 05:32:58.086424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 05:32:58.086458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 
'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 05:32:58.086493 | orchestrator | 2026-04-17 05:32:58.086506 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-17 05:32:58.086520 | orchestrator | Friday 17 April 2026 05:32:57 +0000 (0:00:07.654) 0:05:56.022 ********** 2026-04-17 05:32:58.086539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 05:32:58.086553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 05:32:58.086566 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:32:58.086578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 05:32:58.086599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 05:33:09.803919 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:33:09.804059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 05:33:09.804088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 05:33:09.804110 | orchestrator | 
skipping: [testbed-node-2] 2026-04-17 05:33:09.804128 | orchestrator | 2026-04-17 05:33:09.804147 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-17 05:33:09.804166 | orchestrator | Friday 17 April 2026 05:32:58 +0000 (0:00:00.781) 0:05:56.803 ********** 2026-04-17 05:33:09.804187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-17 05:33:09.804210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-17 05:33:09.804260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:33:09.804299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:33:09.804319 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:33:09.804339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-17 05:33:09.804357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-17 05:33:09.804392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:33:09.804414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:33:09.804427 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:33:09.804441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-17 05:33:09.804454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-17 05:33:09.804468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:33:09.804481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-17 05:33:09.804494 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:33:09.804507 | orchestrator | 2026-04-17 05:33:09.804520 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-17 05:33:09.804533 | orchestrator | Friday 17 April 2026 05:33:00 +0000 (0:00:02.119) 0:05:58.923 ********** 2026-04-17 05:33:09.804545 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:33:09.804558 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:33:09.804571 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:33:09.804584 | orchestrator | 2026-04-17 05:33:09.804597 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-17 05:33:09.804610 | orchestrator | Friday 17 April 2026 05:33:01 +0000 (0:00:01.306) 0:06:00.229 ********** 2026-04-17 05:33:09.804623 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:33:09.804635 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:33:09.804649 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:33:09.804673 | orchestrator | 2026-04-17 05:33:09.804714 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-17 05:33:09.804725 | orchestrator | Friday 17 April 2026 05:33:04 +0000 (0:00:02.411) 0:06:02.641 ********** 2026-04-17 05:33:09.804736 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:33:09.804747 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:33:09.804758 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:33:09.804768 | orchestrator | 2026-04-17 05:33:09.804779 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-17 05:33:09.804790 | orchestrator | Friday 17 April 2026 05:33:04 +0000 (0:00:00.436) 0:06:03.077 ********** 2026-04-17 
05:33:09.804801 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:33:09.804811 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:33:09.804822 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:33:09.804833 | orchestrator | 2026-04-17 05:33:09.804843 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-17 05:33:09.804854 | orchestrator | Friday 17 April 2026 05:33:05 +0000 (0:00:00.867) 0:06:03.945 ********** 2026-04-17 05:33:09.804865 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:33:09.804876 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:33:09.804886 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:33:09.804897 | orchestrator | 2026-04-17 05:33:09.804907 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-17 05:33:09.804918 | orchestrator | Friday 17 April 2026 05:33:05 +0000 (0:00:00.381) 0:06:04.326 ********** 2026-04-17 05:33:09.804929 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:33:09.804940 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:33:09.804950 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:33:09.804961 | orchestrator | 2026-04-17 05:33:09.804972 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-17 05:33:09.804982 | orchestrator | Friday 17 April 2026 05:33:06 +0000 (0:00:00.367) 0:06:04.694 ********** 2026-04-17 05:33:09.804993 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:33:09.805003 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:33:09.805014 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:33:09.805025 | orchestrator | 2026-04-17 05:33:09.805035 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-04-17 05:33:09.805046 | orchestrator | Friday 17 April 2026 05:33:06 +0000 (0:00:00.368) 0:06:05.063 ********** 2026-04-17 
05:33:09.805107 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:33:09.805121 | orchestrator | 2026-04-17 05:33:09.805131 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-17 05:33:09.805142 | orchestrator | Friday 17 April 2026 05:33:08 +0000 (0:00:02.078) 0:06:07.142 ********** 2026-04-17 05:33:09.805171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-17 05:33:13.747781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-17 05:33:13.747913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 05:33:13.747929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-17 05:33:13.747941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 05:33:13.747952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 05:33:13.747964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 05:33:13.748008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 05:33:13.748022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 05:33:13.748041 | orchestrator | 2026-04-17 05:33:13.748054 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-17 05:33:13.748066 | orchestrator | Friday 17 April 2026 05:33:12 +0000 (0:00:03.599) 0:06:10.741 ********** 2026-04-17 05:33:13.748078 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 05:33:13.748090 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:33:13.748101 | orchestrator | } 2026-04-17 05:33:13.748113 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 05:33:13.748123 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:33:13.748134 | orchestrator | } 2026-04-17 05:33:13.748144 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 05:33:13.748155 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:33:13.748165 | orchestrator | } 2026-04-17 05:33:13.748176 | orchestrator | 2026-04-17 05:33:13.748187 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 05:33:13.748198 | orchestrator | Friday 17 April 2026 05:33:13 +0000 (0:00:00.966) 0:06:11.708 ********** 2026-04-17 05:33:13.748209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-17 05:33:13.748220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:33:13.748232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:33:13.748243 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:33:13.748259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-17 05:33:13.748284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:35:00.683572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:35:00.683692 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:35:00.683712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 05:35:00.683727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 05:35:00.683739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 05:35:00.683750 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:35:00.683762 | orchestrator | 2026-04-17 05:35:00.683774 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-17 05:35:00.683786 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-17 05:35:00.683797 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-17 05:35:00.683819 | orchestrator | 
Friday 17 April 2026 05:33:15 +0000 (0:00:01.967) 0:06:13.675 ********** 2026-04-17 05:35:00.683830 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:35:00.683868 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:35:00.683879 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:35:00.683890 | orchestrator | 2026-04-17 05:35:00.683902 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-17 05:35:00.683913 | orchestrator | Friday 17 April 2026 05:33:16 +0000 (0:00:00.831) 0:06:14.507 ********** 2026-04-17 05:35:00.683924 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:35:00.683934 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:35:00.683945 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:35:00.683956 | orchestrator | 2026-04-17 05:35:00.683966 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-17 05:35:00.683977 | orchestrator | Friday 17 April 2026 05:33:16 +0000 (0:00:00.419) 0:06:14.926 ********** 2026-04-17 05:35:00.684025 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:35:00.684037 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:35:00.684048 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:35:00.684061 | orchestrator | 2026-04-17 05:35:00.684074 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-17 05:35:00.684088 | orchestrator | Friday 17 April 2026 05:33:23 +0000 (0:00:06.647) 0:06:21.574 ********** 2026-04-17 05:35:00.684112 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:35:00.684125 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:35:00.684138 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:35:00.684149 | orchestrator | 2026-04-17 05:35:00.684160 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-17 05:35:00.684170 | orchestrator | Friday 17 April 2026 05:33:29 +0000 
(0:00:06.132) 0:06:27.706 ********** 2026-04-17 05:35:00.684181 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:35:00.684192 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:35:00.684203 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:35:00.684214 | orchestrator | 2026-04-17 05:35:00.684243 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-17 05:35:00.684255 | orchestrator | Friday 17 April 2026 05:33:35 +0000 (0:00:06.133) 0:06:33.840 ********** 2026-04-17 05:35:00.684266 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:35:00.684277 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:35:00.684287 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:35:00.684298 | orchestrator | 2026-04-17 05:35:00.684309 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-17 05:35:00.684320 | orchestrator | Friday 17 April 2026 05:33:42 +0000 (0:00:06.736) 0:06:40.577 ********** 2026-04-17 05:35:00.684330 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:35:00.684342 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:35:00.684353 | orchestrator | 2026-04-17 05:35:00.684363 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-17 05:35:00.684374 | orchestrator | Friday 17 April 2026 05:33:45 +0000 (0:00:03.305) 0:06:43.882 ********** 2026-04-17 05:35:00.684385 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:35:00.684396 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:35:00.684406 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:35:00.684417 | orchestrator | 2026-04-17 05:35:00.684428 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-17 05:35:00.684438 | orchestrator | Friday 17 April 2026 05:33:57 +0000 (0:00:12.412) 0:06:56.294 ********** 2026-04-17 05:35:00.684449 | orchestrator | ok: 
[testbed-node-2] 2026-04-17 05:35:00.684460 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:35:00.684471 | orchestrator | 2026-04-17 05:35:00.684482 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-17 05:35:00.684492 | orchestrator | Friday 17 April 2026 05:34:01 +0000 (0:00:03.599) 0:06:59.894 ********** 2026-04-17 05:35:00.684503 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:35:00.684563 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:35:00.684574 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:35:00.684585 | orchestrator | 2026-04-17 05:35:00.684598 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-17 05:35:00.684630 | orchestrator | Friday 17 April 2026 05:34:07 +0000 (0:00:06.500) 0:07:06.395 ********** 2026-04-17 05:35:00.684649 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:35:00.684667 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:35:00.684686 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:35:00.684705 | orchestrator | 2026-04-17 05:35:00.684717 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-17 05:35:00.684728 | orchestrator | Friday 17 April 2026 05:34:13 +0000 (0:00:05.822) 0:07:12.217 ********** 2026-04-17 05:35:00.684739 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:35:00.684750 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:35:00.684760 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:35:00.684771 | orchestrator | 2026-04-17 05:35:00.684782 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-17 05:35:00.684793 | orchestrator | Friday 17 April 2026 05:34:19 +0000 (0:00:05.870) 0:07:18.088 ********** 2026-04-17 05:35:00.684804 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:35:00.684815 | orchestrator | skipping: 
[testbed-node-2] 2026-04-17 05:35:00.684825 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:35:00.684836 | orchestrator | 2026-04-17 05:35:00.684847 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-17 05:35:00.684857 | orchestrator | Friday 17 April 2026 05:34:25 +0000 (0:00:05.837) 0:07:23.925 ********** 2026-04-17 05:35:00.684868 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:35:00.684879 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:35:00.684890 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:35:00.684900 | orchestrator | 2026-04-17 05:35:00.684911 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-04-17 05:35:00.684922 | orchestrator | Friday 17 April 2026 05:34:31 +0000 (0:00:06.351) 0:07:30.277 ********** 2026-04-17 05:35:00.684933 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:35:00.684944 | orchestrator | 2026-04-17 05:35:00.684954 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-17 05:35:00.684965 | orchestrator | Friday 17 April 2026 05:34:35 +0000 (0:00:03.719) 0:07:33.996 ********** 2026-04-17 05:35:00.684976 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:35:00.684987 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:35:00.684998 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:35:00.685060 | orchestrator | 2026-04-17 05:35:00.685073 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-04-17 05:35:00.685084 | orchestrator | Friday 17 April 2026 05:34:47 +0000 (0:00:12.079) 0:07:46.075 ********** 2026-04-17 05:35:00.685095 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:35:00.685106 | orchestrator | 2026-04-17 05:35:00.685117 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-17 05:35:00.685128 | orchestrator | 
Friday 17 April 2026 05:34:52 +0000 (0:00:04.592) 0:07:50.668 ********** 2026-04-17 05:35:00.685139 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:35:00.685150 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:35:00.685161 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:35:00.685171 | orchestrator | 2026-04-17 05:35:00.685182 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-17 05:35:00.685200 | orchestrator | Friday 17 April 2026 05:34:58 +0000 (0:00:05.897) 0:07:56.566 ********** 2026-04-17 05:35:00.685211 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:35:00.685222 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:35:00.685234 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:35:00.685244 | orchestrator | 2026-04-17 05:35:00.685255 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-17 05:35:00.685266 | orchestrator | Friday 17 April 2026 05:34:59 +0000 (0:00:00.985) 0:07:57.552 ********** 2026-04-17 05:35:00.685277 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:35:00.685288 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:35:00.685299 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:35:00.685318 | orchestrator | 2026-04-17 05:35:00.685329 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:35:00.685341 | orchestrator | testbed-node-0 : ok=129  changed=30  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-17 05:35:00.685363 | orchestrator | testbed-node-1 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-17 05:35:02.746617 | orchestrator | testbed-node-2 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-17 05:35:02.746721 | orchestrator | 2026-04-17 05:35:02.746738 | orchestrator | 2026-04-17 05:35:02.746751 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-17 05:35:02.746764 | orchestrator | Friday 17 April 2026 05:35:01 +0000 (0:00:02.451) 0:08:00.004 ********** 2026-04-17 05:35:02.746776 | orchestrator | =============================================================================== 2026-04-17 05:35:02.746787 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.41s 2026-04-17 05:35:02.746799 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.08s 2026-04-17 05:35:02.746810 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.65s 2026-04-17 05:35:02.746821 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 6.74s 2026-04-17 05:35:02.746832 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.67s 2026-04-17 05:35:02.746844 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 6.65s 2026-04-17 05:35:02.746855 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 6.50s 2026-04-17 05:35:02.746866 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 6.35s 2026-04-17 05:35:02.746877 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 6.13s 2026-04-17 05:35:02.746889 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 6.13s 2026-04-17 05:35:02.746900 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.99s 2026-04-17 05:35:02.746911 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 5.90s 2026-04-17 05:35:02.746922 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 5.87s 2026-04-17 05:35:02.746933 | orchestrator | haproxy-config : Copying over 
prometheus haproxy config ----------------- 5.87s 2026-04-17 05:35:02.746945 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 5.84s 2026-04-17 05:35:02.746956 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 5.82s 2026-04-17 05:35:02.746967 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.91s 2026-04-17 05:35:02.746978 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.89s 2026-04-17 05:35:02.746989 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.64s 2026-04-17 05:35:02.747001 | orchestrator | loadbalancer : Wait for master proxysql to start ------------------------ 4.59s 2026-04-17 05:35:02.992824 | orchestrator | + osism apply -a upgrade opensearch 2026-04-17 05:35:04.363128 | orchestrator | 2026-04-17 05:35:04 | INFO  | Prepare task for execution of opensearch. 2026-04-17 05:35:04.434362 | orchestrator | 2026-04-17 05:35:04 | INFO  | Task c8dd1468-3ac7-45d8-b552-908ebe134137 (opensearch) was prepared for execution. 2026-04-17 05:35:04.434449 | orchestrator | 2026-04-17 05:35:04 | INFO  | It takes a moment until task c8dd1468-3ac7-45d8-b552-908ebe134137 (opensearch) has been started and output is visible here. 
2026-04-17 05:35:22.324845 | orchestrator | 2026-04-17 05:35:22.324968 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 05:35:22.324986 | orchestrator | 2026-04-17 05:35:22.324998 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 05:35:22.325034 | orchestrator | Friday 17 April 2026 05:35:09 +0000 (0:00:01.860) 0:00:01.860 ********** 2026-04-17 05:35:22.325046 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:35:22.325058 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:35:22.325069 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:35:22.325080 | orchestrator | 2026-04-17 05:35:22.325091 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 05:35:22.325101 | orchestrator | Friday 17 April 2026 05:35:11 +0000 (0:00:01.946) 0:00:03.807 ********** 2026-04-17 05:35:22.325113 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-17 05:35:22.325124 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-17 05:35:22.325135 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-17 05:35:22.325146 | orchestrator | 2026-04-17 05:35:22.325157 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-17 05:35:22.325167 | orchestrator | 2026-04-17 05:35:22.325192 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-17 05:35:22.325203 | orchestrator | Friday 17 April 2026 05:35:13 +0000 (0:00:01.643) 0:00:05.451 ********** 2026-04-17 05:35:22.325214 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:35:22.325226 | orchestrator | 2026-04-17 05:35:22.325236 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-04-17 05:35:22.325247 | orchestrator | Friday 17 April 2026 05:35:15 +0000 (0:00:02.181) 0:00:07.632 ********** 2026-04-17 05:35:22.325257 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-17 05:35:22.325268 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-17 05:35:22.325279 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-17 05:35:22.325290 | orchestrator | 2026-04-17 05:35:22.325301 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-17 05:35:22.325346 | orchestrator | Friday 17 April 2026 05:35:18 +0000 (0:00:02.950) 0:00:10.583 ********** 2026-04-17 05:35:22.325370 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:35:22.325397 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:35:22.325444 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 05:35:22.325466 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-17 05:35:22.325483 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-17 05:35:22.325499 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-17 05:35:22.325519 | orchestrator | 2026-04-17 05:35:22.325532 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-17 05:35:22.325545 | orchestrator | Friday 17 April 2026 05:35:21 +0000 (0:00:02.744) 0:00:13.328 ********** 2026-04-17 05:35:22.325558 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:35:22.325571 | orchestrator | 2026-04-17 05:35:22.325590 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-17 05:35:27.515585 | orchestrator | Friday 17 April 2026 05:35:23 +0000 (0:00:02.065) 
0:00:15.394 **********
2026-04-17 05:35:27.515686 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:27.515702 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:27.515711 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:27.515720 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:27.515763 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:27.515773 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:27.515781 | orchestrator |
2026-04-17 05:35:27.515789 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-04-17 05:35:27.515798 | orchestrator | Friday 17 April 2026 05:35:26 +0000 (0:00:03.575) 0:00:18.969 **********
2026-04-17 05:35:27.515805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:27.515825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:30.061174 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:35:30.061331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:30.061349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:30.061360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:30.061387 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:35:30.061411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:30.061420 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:35:30.061429 | orchestrator |
2026-04-17 05:35:30.061437 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-04-17 05:35:30.061450 | orchestrator | Friday 17 April 2026 05:35:29 +0000 (0:00:02.122) 0:00:21.092 **********
2026-04-17 05:35:30.061457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:30.061466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:30.061479 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:35:30.061487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:30.061505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:34.049412 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:35:34.049521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:34.049543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:34.049577 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:35:34.049589 | orchestrator |
2026-04-17 05:35:34.049600 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-04-17 05:35:34.049611 | orchestrator | Friday 17 April 2026 05:35:31 +0000 (0:00:02.411) 0:00:23.504 **********
2026-04-17 05:35:34.049621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:34.049663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:34.049675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:34.049686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:34.049705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:34.049732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:47.791269 | orchestrator |
2026-04-17 05:35:47.791390 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-04-17 05:35:47.791409 | orchestrator | Friday 17 April 2026 05:35:35 +0000 (0:00:03.654) 0:00:27.159 **********
2026-04-17 05:35:47.791421 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:35:47.791433 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:35:47.791444 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:35:47.791455 | orchestrator |
2026-04-17 05:35:47.791466 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-04-17 05:35:47.791503 | orchestrator | Friday 17 April 2026 05:35:38 +0000 (0:00:03.678) 0:00:30.837 **********
2026-04-17 05:35:47.791514 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:35:47.791525 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:35:47.791536 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:35:47.791546 | orchestrator |
2026-04-17 05:35:47.791557 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-04-17 05:35:47.791567 | orchestrator | Friday 17 April 2026 05:35:42 +0000 (0:00:03.419) 0:00:34.257 **********
2026-04-17 05:35:47.791581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:47.791596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:47.791608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:35:47.791654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:47.791680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:47.791694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:35:47.791706 | orchestrator |
2026-04-17 05:35:47.791717 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] ***
2026-04-17 05:35:47.791729 | orchestrator | Friday 17 April 2026 05:35:45 +0000 (0:00:01.553) 0:00:37.748 **********
2026-04-17 05:35:47.791740 | orchestrator | changed: [testbed-node-0] => {
2026-04-17 05:35:47.791752 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 05:35:47.791763 | orchestrator | }
2026-04-17 05:35:47.791774 | orchestrator | changed: [testbed-node-1] => {
2026-04-17 05:35:47.791784 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 05:35:47.791795 | orchestrator | }
2026-04-17 05:35:47.791807 | orchestrator | changed: [testbed-node-2] => {
2026-04-17 05:35:47.791820 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 05:35:47.791832 | orchestrator | }
2026-04-17 05:35:47.791844 | orchestrator |
2026-04-17 05:35:47.791857 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-17 05:35:47.791869 | orchestrator | Friday 17 April 2026 05:35:47 +0000 (0:00:01.553) 0:00:39.301 **********
2026-04-17 05:35:47.791897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:39:02.065916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-17 05:39:02.066094 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:39:02.066115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 05:39:02.066154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-17 05:39:02.066193 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:39:02.066224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 05:39:02.066237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-17 05:39:02.066250 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:39:02.066261 | orchestrator | 2026-04-17 05:39:02.066273 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-17 05:39:02.066286 | orchestrator | Friday 17 April 2026 05:35:50 +0000 (0:00:02.697) 0:00:41.998 ********** 2026-04-17 05:39:02.066296 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:39:02.066307 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:39:02.066317 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:39:02.066328 | orchestrator | 2026-04-17 05:39:02.066339 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-17 05:39:02.066350 | orchestrator | Friday 17 April 2026 05:35:51 +0000 (0:00:01.526) 0:00:43.525 ********** 2026-04-17 05:39:02.066360 | orchestrator | 2026-04-17 05:39:02.066371 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-17 05:39:02.066381 | 
orchestrator | Friday 17 April 2026 05:35:52 +0000 (0:00:00.452) 0:00:43.978 ********** 2026-04-17 05:39:02.066392 | orchestrator | 2026-04-17 05:39:02.066403 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-17 05:39:02.066413 | orchestrator | Friday 17 April 2026 05:35:52 +0000 (0:00:00.477) 0:00:44.455 ********** 2026-04-17 05:39:02.066427 | orchestrator | 2026-04-17 05:39:02.066439 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-17 05:39:02.066452 | orchestrator | Friday 17 April 2026 05:35:53 +0000 (0:00:00.821) 0:00:45.276 ********** 2026-04-17 05:39:02.066464 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:39:02.066478 | orchestrator | 2026-04-17 05:39:02.066490 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-17 05:39:02.066512 | orchestrator | Friday 17 April 2026 05:35:56 +0000 (0:00:03.437) 0:00:48.714 ********** 2026-04-17 05:39:02.066525 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:39:02.066538 | orchestrator | 2026-04-17 05:39:02.066551 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-17 05:39:02.066563 | orchestrator | Friday 17 April 2026 05:36:05 +0000 (0:00:08.704) 0:00:57.418 ********** 2026-04-17 05:39:02.066575 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:39:02.066589 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:39:02.066601 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:39:02.066613 | orchestrator | 2026-04-17 05:39:02.066625 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-17 05:39:02.066638 | orchestrator | Friday 17 April 2026 05:37:18 +0000 (0:01:13.010) 0:02:10.429 ********** 2026-04-17 05:39:02.066678 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:39:02.066691 | orchestrator | changed: [testbed-node-2] 
2026-04-17 05:39:02.066704 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:39:02.066718 | orchestrator | 2026-04-17 05:39:02.066730 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-17 05:39:02.066746 | orchestrator | Friday 17 April 2026 05:38:49 +0000 (0:01:31.483) 0:03:41.912 ********** 2026-04-17 05:39:02.066758 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:39:02.066769 | orchestrator | 2026-04-17 05:39:02.066780 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-17 05:39:02.066790 | orchestrator | Friday 17 April 2026 05:38:51 +0000 (0:00:02.039) 0:03:43.952 ********** 2026-04-17 05:39:02.066801 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:39:02.066812 | orchestrator | 2026-04-17 05:39:02.066822 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-17 05:39:02.066833 | orchestrator | Friday 17 April 2026 05:38:55 +0000 (0:00:03.610) 0:03:47.563 ********** 2026-04-17 05:39:02.066844 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:39:02.066854 | orchestrator | 2026-04-17 05:39:02.066865 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-17 05:39:02.066876 | orchestrator | Friday 17 April 2026 05:38:58 +0000 (0:00:03.204) 0:03:50.767 ********** 2026-04-17 05:39:02.066887 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:39:02.066897 | orchestrator | 2026-04-17 05:39:02.066908 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-17 05:39:02.066926 | orchestrator | Friday 17 April 2026 05:39:02 +0000 (0:00:03.254) 0:03:54.021 ********** 2026-04-17 05:39:05.616923 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:39:05.617021 | orchestrator | 2026-04-17 05:39:05.617037 | 
orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-17 05:39:05.617050 | orchestrator | Friday 17 April 2026 05:39:03 +0000 (0:00:01.321) 0:03:55.343 ********** 2026-04-17 05:39:05.617061 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:39:05.617072 | orchestrator | 2026-04-17 05:39:05.617083 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:39:05.617098 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 05:39:05.617119 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 05:39:05.617138 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 05:39:05.617157 | orchestrator | 2026-04-17 05:39:05.617175 | orchestrator | 2026-04-17 05:39:05.617193 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:39:05.617212 | orchestrator | Friday 17 April 2026 05:39:05 +0000 (0:00:01.685) 0:03:57.029 ********** 2026-04-17 05:39:05.617231 | orchestrator | =============================================================================== 2026-04-17 05:39:05.617337 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 91.48s 2026-04-17 05:39:05.617351 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.01s 2026-04-17 05:39:05.617362 | orchestrator | opensearch : Perform a flush -------------------------------------------- 8.70s 2026-04-17 05:39:05.617372 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.68s 2026-04-17 05:39:05.617383 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.65s 2026-04-17 05:39:05.617394 | orchestrator | opensearch : Wait for 
OpenSearch to become ready ------------------------ 3.61s 2026-04-17 05:39:05.617404 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.58s 2026-04-17 05:39:05.617415 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.49s 2026-04-17 05:39:05.617426 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.44s 2026-04-17 05:39:05.617436 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.42s 2026-04-17 05:39:05.617447 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.25s 2026-04-17 05:39:05.617457 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 3.20s 2026-04-17 05:39:05.617468 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.95s 2026-04-17 05:39:05.617478 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.74s 2026-04-17 05:39:05.617489 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.70s 2026-04-17 05:39:05.617500 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 2.41s 2026-04-17 05:39:05.617516 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.18s 2026-04-17 05:39:05.617536 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 2.12s 2026-04-17 05:39:05.617555 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.07s 2026-04-17 05:39:05.617573 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.04s 2026-04-17 05:39:05.875969 | orchestrator | + osism apply -a upgrade memcached 2026-04-17 05:39:07.202419 | orchestrator | 2026-04-17 05:39:07 | INFO  | Prepare task for execution of memcached. 
2026-04-17 05:39:07.270931 | orchestrator | 2026-04-17 05:39:07 | INFO  | Task 44e7debf-be5b-453a-9f37-f70465262c2a (memcached) was prepared for execution. 2026-04-17 05:39:07.271024 | orchestrator | 2026-04-17 05:39:07 | INFO  | It takes a moment until task 44e7debf-be5b-453a-9f37-f70465262c2a (memcached) has been started and output is visible here. 2026-04-17 05:39:41.694303 | orchestrator | 2026-04-17 05:39:41.694492 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 05:39:41.694524 | orchestrator | 2026-04-17 05:39:41.694545 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 05:39:41.694563 | orchestrator | Friday 17 April 2026 05:39:12 +0000 (0:00:01.590) 0:00:01.590 ********** 2026-04-17 05:39:41.694582 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:39:41.694603 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:39:41.694622 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:39:41.694641 | orchestrator | 2026-04-17 05:39:41.694660 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 05:39:41.694672 | orchestrator | Friday 17 April 2026 05:39:14 +0000 (0:00:02.119) 0:00:03.710 ********** 2026-04-17 05:39:41.694684 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-17 05:39:41.694695 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-17 05:39:41.694706 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-17 05:39:41.694717 | orchestrator | 2026-04-17 05:39:41.694727 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-17 05:39:41.694738 | orchestrator | 2026-04-17 05:39:41.694776 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-17 05:39:41.694789 | orchestrator | Friday 17 April 2026 05:39:16 +0000 
(0:00:02.088) 0:00:05.798 ********** 2026-04-17 05:39:41.694802 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:39:41.694815 | orchestrator | 2026-04-17 05:39:41.694826 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-17 05:39:41.694839 | orchestrator | Friday 17 April 2026 05:39:20 +0000 (0:00:03.549) 0:00:09.348 ********** 2026-04-17 05:39:41.694851 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-17 05:39:41.694863 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-17 05:39:41.694892 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-17 05:39:41.694915 | orchestrator | 2026-04-17 05:39:41.694928 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-17 05:39:41.694941 | orchestrator | Friday 17 April 2026 05:39:22 +0000 (0:00:02.447) 0:00:11.796 ********** 2026-04-17 05:39:41.694954 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-17 05:39:41.694966 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-17 05:39:41.694978 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-17 05:39:41.694990 | orchestrator | 2026-04-17 05:39:41.695003 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-04-17 05:39:41.695016 | orchestrator | Friday 17 April 2026 05:39:25 +0000 (0:00:02.760) 0:00:14.556 ********** 2026-04-17 05:39:41.695034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-17 05:39:41.695135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-17 05:39:41.695187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-17 05:39:41.695201 | 
orchestrator | 2026-04-17 05:39:41.695212 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-17 05:39:41.695233 | orchestrator | Friday 17 April 2026 05:39:27 +0000 (0:00:02.275) 0:00:16.832 ********** 2026-04-17 05:39:41.695244 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 05:39:41.695256 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:39:41.695274 | orchestrator | } 2026-04-17 05:39:41.695294 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 05:39:41.695313 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:39:41.695331 | orchestrator | } 2026-04-17 05:39:41.695349 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 05:39:41.695366 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:39:41.695383 | orchestrator | } 2026-04-17 05:39:41.695401 | orchestrator | 2026-04-17 05:39:41.695448 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 05:39:41.695466 | orchestrator | Friday 17 April 2026 05:39:28 +0000 (0:00:01.419) 0:00:18.252 ********** 2026-04-17 05:39:41.695486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-17 05:39:41.695507 | orchestrator | skipping: 
[testbed-node-0] 2026-04-17 05:39:41.695526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-17 05:39:41.695538 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:39:41.695549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-17 05:39:41.695560 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:39:41.695571 | orchestrator | 2026-04-17 05:39:41.695582 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-17 05:39:41.695593 | orchestrator | Friday 17 April 
2026 05:39:31 +0000 (0:00:02.164) 0:00:20.417 ********** 2026-04-17 05:39:41.695604 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:39:41.695614 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:39:41.695625 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:39:41.695635 | orchestrator | 2026-04-17 05:39:41.695649 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:39:41.695684 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 05:39:41.695704 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 05:39:41.695722 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 05:39:41.695739 | orchestrator | 2026-04-17 05:39:41.695756 | orchestrator | 2026-04-17 05:39:41.695775 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:39:41.695808 | orchestrator | Friday 17 April 2026 05:39:41 +0000 (0:00:10.518) 0:00:30.935 ********** 2026-04-17 05:39:42.147908 | orchestrator | =============================================================================== 2026-04-17 05:39:42.148038 | orchestrator | memcached : Restart memcached container -------------------------------- 10.52s 2026-04-17 05:39:42.148054 | orchestrator | memcached : include_tasks ----------------------------------------------- 3.55s 2026-04-17 05:39:42.148065 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.76s 2026-04-17 05:39:42.148076 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.45s 2026-04-17 05:39:42.148087 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.28s 2026-04-17 05:39:42.148101 | orchestrator | service-check-containers : Include tasks 
-------------------------------- 2.17s 2026-04-17 05:39:42.148120 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.12s 2026-04-17 05:39:42.148139 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.09s 2026-04-17 05:39:42.148169 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.42s 2026-04-17 05:39:42.364742 | orchestrator | + osism apply -a upgrade redis 2026-04-17 05:39:43.736557 | orchestrator | 2026-04-17 05:39:43 | INFO  | Prepare task for execution of redis. 2026-04-17 05:39:43.804055 | orchestrator | 2026-04-17 05:39:43 | INFO  | Task f21e44e5-8974-4855-a490-799dc612e5aa (redis) was prepared for execution. 2026-04-17 05:39:43.804175 | orchestrator | 2026-04-17 05:39:43 | INFO  | It takes a moment until task f21e44e5-8974-4855-a490-799dc612e5aa (redis) has been started and output is visible here. 2026-04-17 05:40:00.848595 | orchestrator | 2026-04-17 05:40:00.848708 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 05:40:00.848723 | orchestrator | 2026-04-17 05:40:00.848735 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 05:40:00.848745 | orchestrator | Friday 17 April 2026 05:39:49 +0000 (0:00:01.774) 0:00:01.774 ********** 2026-04-17 05:40:00.848755 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:40:00.848766 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:40:00.848776 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:40:00.848786 | orchestrator | 2026-04-17 05:40:00.848796 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 05:40:00.848806 | orchestrator | Friday 17 April 2026 05:39:50 +0000 (0:00:01.902) 0:00:03.677 ********** 2026-04-17 05:40:00.848815 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-17 
05:40:00.848826 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-17 05:40:00.848836 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-17 05:40:00.848846 | orchestrator | 2026-04-17 05:40:00.848855 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-17 05:40:00.848865 | orchestrator | 2026-04-17 05:40:00.848875 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-17 05:40:00.848885 | orchestrator | Friday 17 April 2026 05:39:53 +0000 (0:00:02.541) 0:00:06.218 ********** 2026-04-17 05:40:00.848895 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:40:00.848925 | orchestrator | 2026-04-17 05:40:00.848936 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-17 05:40:00.848945 | orchestrator | Friday 17 April 2026 05:39:56 +0000 (0:00:02.788) 0:00:09.007 ********** 2026-04-17 05:40:00.848958 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 05:40:00.848973 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 05:40:00.848997 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 05:40:00.849009 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 05:40:00.849038 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 05:40:00.849049 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 05:40:00.849066 | orchestrator | 2026-04-17 05:40:00.849076 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-17 05:40:00.849086 | orchestrator | Friday 17 April 2026 05:39:58 +0000 (0:00:02.586) 0:00:11.594 ********** 2026-04-17 05:40:00.849096 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 05:40:00.849106 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 05:40:00.849116 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 05:40:00.849131 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-04-17 05:40:00.849150 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.104591 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.104709 | orchestrator | 2026-04-17 05:40:09.104726 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-17 05:40:09.104743 | orchestrator | Friday 17 April 2026 05:40:02 +0000 (0:00:04.112) 0:00:15.706 ********** 2026-04-17 05:40:09.104764 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.104786 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.104806 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.104842 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.104863 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.104905 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.104940 | orchestrator | 2026-04-17 05:40:09.104960 | orchestrator 
| TASK [service-check-containers : redis | Check containers] ********************* 2026-04-17 05:40:09.104972 | orchestrator | Friday 17 April 2026 05:40:07 +0000 (0:00:04.173) 0:00:19.879 ********** 2026-04-17 05:40:09.104983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.104995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.105006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}}) 2026-04-17 05:40:09.105023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.105037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 05:40:09.105064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 05:40:37.591417 | orchestrator | 2026-04-17 05:40:37.591536 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-04-17 05:40:37.591553 | orchestrator | Friday 17 April 2026 05:40:10 +0000 (0:00:03.151) 0:00:23.031 ********** 2026-04-17 05:40:37.591566 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 05:40:37.591579 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:40:37.591590 | orchestrator | } 2026-04-17 05:40:37.591601 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 05:40:37.591611 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:40:37.591622 | orchestrator | } 2026-04-17 05:40:37.591633 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 05:40:37.591644 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:40:37.591655 | orchestrator | } 2026-04-17 05:40:37.591666 | orchestrator | 2026-04-17 05:40:37.591677 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 05:40:37.591688 | orchestrator | Friday 17 April 2026 05:40:11 +0000 (0:00:01.444) 0:00:24.475 ********** 2026-04-17 05:40:37.591701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}})  2026-04-17 05:40:37.591716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-17 05:40:37.591728 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:40:37.591756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-17 05:40:37.591769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-17 05:40:37.591803 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:40:37.591815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-17 05:40:37.591846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-17 05:40:37.591858 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:40:37.591872 | orchestrator | 2026-04-17 05:40:37.591891 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-17 05:40:37.591910 | orchestrator | Friday 17 April 2026 05:40:13 +0000 
(0:00:02.116) 0:00:26.591 ********** 2026-04-17 05:40:37.591929 | orchestrator | 2026-04-17 05:40:37.591948 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-17 05:40:37.591967 | orchestrator | Friday 17 April 2026 05:40:14 +0000 (0:00:00.458) 0:00:27.050 ********** 2026-04-17 05:40:37.591986 | orchestrator | 2026-04-17 05:40:37.592002 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-17 05:40:37.592015 | orchestrator | Friday 17 April 2026 05:40:14 +0000 (0:00:00.451) 0:00:27.502 ********** 2026-04-17 05:40:37.592027 | orchestrator | 2026-04-17 05:40:37.592039 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-17 05:40:37.592051 | orchestrator | Friday 17 April 2026 05:40:15 +0000 (0:00:00.808) 0:00:28.311 ********** 2026-04-17 05:40:37.592064 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:40:37.592076 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:40:37.592088 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:40:37.592124 | orchestrator | 2026-04-17 05:40:37.592137 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-04-17 05:40:37.592295 | orchestrator | Friday 17 April 2026 05:40:25 +0000 (0:00:10.454) 0:00:38.765 ********** 2026-04-17 05:40:37.592309 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:40:37.592320 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:40:37.592331 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:40:37.592341 | orchestrator | 2026-04-17 05:40:37.592352 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 05:40:37.592365 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 05:40:37.592377 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 
failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 05:40:37.592399 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 05:40:37.592410 | orchestrator | 2026-04-17 05:40:37.592421 | orchestrator | 2026-04-17 05:40:37.592439 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:40:37.592451 | orchestrator | Friday 17 April 2026 05:40:37 +0000 (0:00:11.188) 0:00:49.954 ********** 2026-04-17 05:40:37.592462 | orchestrator | =============================================================================== 2026-04-17 05:40:37.592472 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.19s 2026-04-17 05:40:37.592483 | orchestrator | redis : Restart redis container ---------------------------------------- 10.45s 2026-04-17 05:40:37.592493 | orchestrator | redis : Copying over redis config files --------------------------------- 4.17s 2026-04-17 05:40:37.592504 | orchestrator | redis : Copying over default config.json files -------------------------- 4.11s 2026-04-17 05:40:37.592514 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.15s 2026-04-17 05:40:37.592525 | orchestrator | redis : include_tasks --------------------------------------------------- 2.79s 2026-04-17 05:40:37.592535 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.59s 2026-04-17 05:40:37.592546 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.54s 2026-04-17 05:40:37.592557 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.12s 2026-04-17 05:40:37.592567 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.90s 2026-04-17 05:40:37.592578 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.72s 
2026-04-17 05:40:37.592588 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.44s 2026-04-17 05:40:37.830486 | orchestrator | + osism apply -a upgrade mariadb 2026-04-17 05:40:39.267024 | orchestrator | 2026-04-17 05:40:39 | INFO  | Prepare task for execution of mariadb. 2026-04-17 05:40:39.339283 | orchestrator | 2026-04-17 05:40:39 | INFO  | Task fca06aa6-e8ab-4fb1-b0af-d7ca00c24362 (mariadb) was prepared for execution. 2026-04-17 05:40:39.339358 | orchestrator | 2026-04-17 05:40:39 | INFO  | It takes a moment until task fca06aa6-e8ab-4fb1-b0af-d7ca00c24362 (mariadb) has been started and output is visible here. 2026-04-17 05:40:54.271882 | orchestrator | 2026-04-17 05:40:54.272062 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 05:40:54.272082 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-17 05:40:54.272096 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-17 05:40:54.272118 | orchestrator | 2026-04-17 05:40:54.272130 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 05:40:54.272140 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-17 05:40:54.272151 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-17 05:40:54.272172 | orchestrator | Friday 17 April 2026 05:40:44 +0000 (0:00:01.507) 0:00:01.507 ********** 2026-04-17 05:40:54.272183 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:40:54.272194 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:40:54.272206 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:40:54.272217 | orchestrator | 2026-04-17 05:40:54.272228 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 05:40:54.272239 | orchestrator | Friday 17 April 2026 05:40:45 +0000 
(0:00:01.093) 0:00:02.601 ********** 2026-04-17 05:40:54.272249 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-17 05:40:54.272261 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-17 05:40:54.272297 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-17 05:40:54.272308 | orchestrator | 2026-04-17 05:40:54.272319 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-17 05:40:54.272329 | orchestrator | 2026-04-17 05:40:54.272340 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-17 05:40:54.272351 | orchestrator | Friday 17 April 2026 05:40:46 +0000 (0:00:00.868) 0:00:03.469 ********** 2026-04-17 05:40:54.272361 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 05:40:54.272372 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-17 05:40:54.272382 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-17 05:40:54.272393 | orchestrator | 2026-04-17 05:40:54.272403 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 05:40:54.272415 | orchestrator | Friday 17 April 2026 05:40:46 +0000 (0:00:00.406) 0:00:03.876 ********** 2026-04-17 05:40:54.272427 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:40:54.272440 | orchestrator | 2026-04-17 05:40:54.272451 | orchestrator | TASK [mariadb : Remove mariadb-clustercheck] *********************************** 2026-04-17 05:40:54.272464 | orchestrator | Friday 17 April 2026 05:40:48 +0000 (0:00:01.381) 0:00:05.257 ********** 2026-04-17 05:40:54.272475 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:40:54.272487 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:40:54.272499 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:40:54.272510 | 
orchestrator | 2026-04-17 05:40:54.272523 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-17 05:40:54.272535 | orchestrator | Friday 17 April 2026 05:40:50 +0000 (0:00:02.175) 0:00:07.432 ********** 2026-04-17 05:40:54.272589 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 05:40:54.272608 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 
2026-04-17 05:40:54.272635 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 05:40:54.272649 | orchestrator | 2026-04-17 05:40:54.272660 | orchestrator | TASK [mariadb : Ensuring database backup 
config directory exists] ************** 2026-04-17 05:40:54.272671 | orchestrator | Friday 17 April 2026 05:40:53 +0000 (0:00:03.026) 0:00:10.458 ********** 2026-04-17 05:40:54.272682 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:40:54.272692 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:40:54.272703 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:40:54.272713 | orchestrator | 2026-04-17 05:40:54.272724 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-17 05:40:54.272742 | orchestrator | Friday 17 April 2026 05:40:54 +0000 (0:00:00.751) 0:00:11.210 ********** 2026-04-17 05:41:08.031398 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:08.031539 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:08.031556 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:41:08.031569 | orchestrator | 2026-04-17 05:41:08.031580 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-17 05:41:08.031593 | orchestrator | Friday 17 April 2026 05:40:55 +0000 (0:00:01.241) 0:00:12.452 ********** 2026-04-17 05:41:08.031610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 05:41:08.031642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 05:41:08.031684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 05:41:08.031697 | orchestrator | 2026-04-17 05:41:08.031709 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-17 05:41:08.031720 | orchestrator | Friday 17 April 2026 05:40:59 +0000 (0:00:03.625) 0:00:16.077 ********** 2026-04-17 05:41:08.031732 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:08.031743 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:08.031754 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:41:08.031765 | orchestrator | 2026-04-17 05:41:08.031776 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-17 05:41:08.031787 | orchestrator | Friday 17 April 2026 05:41:00 +0000 (0:00:01.092) 0:00:17.170 ********** 2026-04-17 05:41:08.031797 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:41:08.031808 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:41:08.031819 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:41:08.031830 | orchestrator | 2026-04-17 05:41:08.031840 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 05:41:08.031851 | orchestrator | Friday 17 April 2026 
05:41:04 +0000 (0:00:04.429) 0:00:21.599 ********** 2026-04-17 05:41:08.031867 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:41:08.031879 | orchestrator | 2026-04-17 05:41:08.031890 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-17 05:41:08.031900 | orchestrator | Friday 17 April 2026 05:41:05 +0000 (0:00:01.017) 0:00:22.617 ********** 2026-04-17 05:41:08.031992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:10.985725 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:10.985832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:10.985852 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:10.985883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:10.985918 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:10.985980 | orchestrator | 2026-04-17 05:41:10.985995 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-17 05:41:10.986007 | orchestrator | Friday 17 April 2026 05:41:08 +0000 (0:00:02.880) 0:00:25.497 ********** 2026-04-17 05:41:10.986103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:10.986119 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:10.986139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:10.986160 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:10.986182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:17.677722 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:17.677866 | orchestrator | 2026-04-17 05:41:17.677974 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-17 05:41:17.678006 | orchestrator | Friday 17 April 2026 05:41:11 +0000 (0:00:02.534) 0:00:28.032 ********** 2026-04-17 05:41:17.678112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:17.678168 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:17.678192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:17.678213 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:17.678273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:17.678309 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:17.678329 | orchestrator | 2026-04-17 05:41:17.678349 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-17 05:41:17.678368 | orchestrator | Friday 17 April 2026 05:41:14 +0000 (0:00:03.476) 0:00:31.509 ********** 2026-04-17 05:41:17.678390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' 
server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 05:41:17.678440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 05:41:21.686501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 05:41:21.686606 | orchestrator | 2026-04-17 05:41:21.686621 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-17 05:41:21.686634 | orchestrator | Friday 17 April 2026 05:41:17 +0000 (0:00:03.425) 0:00:34.934 ********** 2026-04-17 05:41:21.686645 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 05:41:21.686655 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:41:21.686665 | orchestrator | } 2026-04-17 05:41:21.686675 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 05:41:21.686685 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:41:21.686694 | orchestrator | } 2026-04-17 05:41:21.686703 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 05:41:21.686713 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:41:21.686722 | orchestrator | } 2026-04-17 05:41:21.686732 | orchestrator | 2026-04-17 05:41:21.686742 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 05:41:21.686751 | orchestrator | Friday 17 April 2026 05:41:18 +0000 (0:00:00.415) 0:00:35.350 ********** 2026-04-17 05:41:21.686796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:21.686829 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:21.686841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:21.686851 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:21.686867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:21.686932 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:21.686943 | orchestrator | 2026-04-17 05:41:21.686953 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-17 05:41:21.687026 | orchestrator | Friday 17 April 2026 05:41:21 
+0000 (0:00:03.279) 0:00:38.630 ********** 2026-04-17 05:41:31.469442 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.469560 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.469577 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.469590 | orchestrator | 2026-04-17 05:41:31.469602 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-17 05:41:31.469615 | orchestrator | Friday 17 April 2026 05:41:22 +0000 (0:00:00.628) 0:00:39.258 ********** 2026-04-17 05:41:31.469626 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.469637 | orchestrator | 2026-04-17 05:41:31.469649 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-04-17 05:41:31.469660 | orchestrator | Friday 17 April 2026 05:41:22 +0000 (0:00:00.133) 0:00:39.392 ********** 2026-04-17 05:41:31.469671 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.469682 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.469694 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.469705 | orchestrator | 2026-04-17 05:41:31.469717 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-17 05:41:31.469728 | orchestrator | Friday 17 April 2026 05:41:22 +0000 (0:00:00.344) 0:00:39.736 ********** 2026-04-17 05:41:31.469739 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.469749 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.469760 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.469771 | orchestrator | 2026-04-17 05:41:31.469783 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-17 05:41:31.469794 | orchestrator | Friday 17 April 2026 05:41:23 +0000 (0:00:00.381) 0:00:40.118 ********** 2026-04-17 05:41:31.469805 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.469817 | 
orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.469828 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.469887 | orchestrator | 2026-04-17 05:41:31.469900 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-17 05:41:31.469911 | orchestrator | Friday 17 April 2026 05:41:23 +0000 (0:00:00.652) 0:00:40.770 ********** 2026-04-17 05:41:31.469923 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.469934 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.469945 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.469957 | orchestrator | 2026-04-17 05:41:31.469968 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-04-17 05:41:31.470006 | orchestrator | Friday 17 April 2026 05:41:24 +0000 (0:00:00.359) 0:00:41.130 ********** 2026-04-17 05:41:31.470081 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.470094 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.470106 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.470118 | orchestrator | 2026-04-17 05:41:31.470130 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-04-17 05:41:31.470141 | orchestrator | Friday 17 April 2026 05:41:24 +0000 (0:00:00.389) 0:00:41.519 ********** 2026-04-17 05:41:31.470152 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.470163 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.470175 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.470186 | orchestrator | 2026-04-17 05:41:31.470197 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-17 05:41:31.470209 | orchestrator | Friday 17 April 2026 05:41:24 +0000 (0:00:00.381) 0:00:41.901 ********** 2026-04-17 05:41:31.470221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  
2026-04-17 05:41:31.470233 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-17 05:41:31.470245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-17 05:41:31.470256 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.470267 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-17 05:41:31.470278 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-17 05:41:31.470290 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-17 05:41:31.470301 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.470312 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-17 05:41:31.470323 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-17 05:41:31.470335 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-17 05:41:31.470346 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.470357 | orchestrator | 2026-04-17 05:41:31.470369 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-04-17 05:41:31.470380 | orchestrator | Friday 17 April 2026 05:41:25 +0000 (0:00:00.742) 0:00:42.643 ********** 2026-04-17 05:41:31.470392 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.470403 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.470414 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.470425 | orchestrator | 2026-04-17 05:41:31.470450 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-17 05:41:31.470462 | orchestrator | Friday 17 April 2026 05:41:26 +0000 (0:00:00.369) 0:00:43.013 ********** 2026-04-17 05:41:31.470473 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.470485 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.470496 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
05:41:31.470507 | orchestrator | 2026-04-17 05:41:31.470519 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-17 05:41:31.470530 | orchestrator | Friday 17 April 2026 05:41:26 +0000 (0:00:00.373) 0:00:43.386 ********** 2026-04-17 05:41:31.470541 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.470553 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.470564 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.470576 | orchestrator | 2026-04-17 05:41:31.470587 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-17 05:41:31.470599 | orchestrator | Friday 17 April 2026 05:41:27 +0000 (0:00:00.622) 0:00:44.009 ********** 2026-04-17 05:41:31.470611 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.470622 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.470634 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.470645 | orchestrator | 2026-04-17 05:41:31.470656 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-04-17 05:41:31.470688 | orchestrator | Friday 17 April 2026 05:41:27 +0000 (0:00:00.387) 0:00:44.396 ********** 2026-04-17 05:41:31.470722 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.470733 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.470744 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.470755 | orchestrator | 2026-04-17 05:41:31.470767 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-17 05:41:31.470778 | orchestrator | Friday 17 April 2026 05:41:27 +0000 (0:00:00.346) 0:00:44.743 ********** 2026-04-17 05:41:31.470789 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.470799 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.470810 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
05:41:31.470821 | orchestrator | 2026-04-17 05:41:31.470864 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-17 05:41:31.470876 | orchestrator | Friday 17 April 2026 05:41:28 +0000 (0:00:00.348) 0:00:45.091 ********** 2026-04-17 05:41:31.470887 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.470897 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.470909 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.470919 | orchestrator | 2026-04-17 05:41:31.470930 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-17 05:41:31.470941 | orchestrator | Friday 17 April 2026 05:41:28 +0000 (0:00:00.622) 0:00:45.714 ********** 2026-04-17 05:41:31.470952 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.470962 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:31.470973 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:31.470984 | orchestrator | 2026-04-17 05:41:31.470995 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-17 05:41:31.471006 | orchestrator | Friday 17 April 2026 05:41:29 +0000 (0:00:00.352) 0:00:46.067 ********** 2026-04-17 05:41:31.471024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:31.471040 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:31.471116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:34.998907 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:34.999015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 
'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:34.999037 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:34.999050 | orchestrator | 2026-04-17 05:41:34.999062 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-17 05:41:34.999075 | orchestrator | Friday 17 April 2026 05:41:31 +0000 (0:00:02.573) 0:00:48.640 ********** 2026-04-17 05:41:34.999086 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:34.999096 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:34.999107 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:34.999117 | orchestrator | 2026-04-17 05:41:34.999128 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-17 05:41:34.999164 | orchestrator | Friday 17 April 2026 05:41:32 +0000 (0:00:00.375) 0:00:49.016 ********** 2026-04-17 05:41:34.999210 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:34.999225 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:41:34.999237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:34.999249 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:41:34.999266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 05:41:34.999285 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:41:34.999297 | orchestrator | 2026-04-17 05:41:34.999309 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-17 05:41:34.999320 | orchestrator | Friday 17 April 
2026 05:41:34 +0000 (0:00:02.799) 0:00:51.815 ********** 2026-04-17 05:41:34.999338 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:43:38.100065 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:43:38.100187 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:43:38.100204 | orchestrator | 2026-04-17 05:43:38.100217 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-17 05:43:38.100230 | orchestrator | Friday 17 April 2026 05:41:35 +0000 (0:00:00.808) 0:00:52.624 ********** 2026-04-17 05:43:38.100241 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:43:38.100300 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:43:38.100312 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:43:38.100323 | orchestrator | 2026-04-17 05:43:38.100335 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-17 05:43:38.100347 | orchestrator | Friday 17 April 2026 05:41:36 +0000 (0:00:00.345) 0:00:52.969 ********** 2026-04-17 05:43:38.100358 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:43:38.100369 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:43:38.100380 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:43:38.100391 | orchestrator | 2026-04-17 05:43:38.100402 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-17 05:43:38.100413 | orchestrator | Friday 17 April 2026 05:41:36 +0000 (0:00:00.352) 0:00:53.322 ********** 2026-04-17 05:43:38.100424 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:43:38.100435 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:43:38.100446 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:43:38.100457 | orchestrator | 2026-04-17 05:43:38.100468 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-17 05:43:38.100479 | orchestrator | Friday 17 April 
2026 05:41:37 +0000 (0:00:01.208) 0:00:54.530 ********** 2026-04-17 05:43:38.100490 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:43:38.100500 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:43:38.100511 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:43:38.100549 | orchestrator | 2026-04-17 05:43:38.100561 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-17 05:43:38.100572 | orchestrator | Friday 17 April 2026 05:41:38 +0000 (0:00:00.736) 0:00:55.266 ********** 2026-04-17 05:43:38.100583 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:43:38.100597 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:43:38.100610 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:43:38.100622 | orchestrator | 2026-04-17 05:43:38.100634 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-17 05:43:38.100647 | orchestrator | Friday 17 April 2026 05:41:39 +0000 (0:00:01.158) 0:00:56.425 ********** 2026-04-17 05:43:38.100659 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:43:38.100672 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:43:38.100685 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:43:38.100697 | orchestrator | 2026-04-17 05:43:38.100709 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-17 05:43:38.100722 | orchestrator | Friday 17 April 2026 05:41:39 +0000 (0:00:00.381) 0:00:56.807 ********** 2026-04-17 05:43:38.100734 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:43:38.100746 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:43:38.100758 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:43:38.100770 | orchestrator | 2026-04-17 05:43:38.100782 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-17 05:43:38.100795 | orchestrator | Friday 17 April 2026 05:41:40 +0000 (0:00:00.384) 0:00:57.192 ********** 
2026-04-17 05:43:38.100807 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:43:38.100820 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:43:38.100832 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:43:38.100845 | orchestrator | 2026-04-17 05:43:38.100857 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-17 05:43:38.100870 | orchestrator | Friday 17 April 2026 05:41:41 +0000 (0:00:00.882) 0:00:58.074 ********** 2026-04-17 05:43:38.100882 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:43:38.100909 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:43:38.100922 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:43:38.100935 | orchestrator | 2026-04-17 05:43:38.100947 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-17 05:43:38.100958 | orchestrator | Friday 17 April 2026 05:41:41 +0000 (0:00:00.661) 0:00:58.736 ********** 2026-04-17 05:43:38.100969 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:43:38.100980 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:43:38.100990 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:43:38.101001 | orchestrator | 2026-04-17 05:43:38.101012 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-17 05:43:38.101022 | orchestrator | Friday 17 April 2026 05:41:42 +0000 (0:00:00.394) 0:00:59.131 ********** 2026-04-17 05:43:38.101033 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:43:38.101044 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:43:38.101054 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:43:38.101065 | orchestrator | 2026-04-17 05:43:38.101076 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-17 05:43:38.101087 | orchestrator | Friday 17 April 2026 05:41:44 +0000 (0:00:02.336) 0:01:01.467 ********** 2026-04-17 05:43:38.101097 | orchestrator | ok: 
[testbed-node-0] 2026-04-17 05:43:38.101108 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:43:38.101119 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:43:38.101129 | orchestrator | 2026-04-17 05:43:38.101140 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-17 05:43:38.101151 | orchestrator | Friday 17 April 2026 05:41:44 +0000 (0:00:00.393) 0:01:01.861 ********** 2026-04-17 05:43:38.101161 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:43:38.101172 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:43:38.101183 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:43:38.101194 | orchestrator | 2026-04-17 05:43:38.101205 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-17 05:43:38.101227 | orchestrator | Friday 17 April 2026 05:41:45 +0000 (0:00:00.694) 0:01:02.556 ********** 2026-04-17 05:43:38.101246 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:43:38.101302 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:43:38.101321 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:43:38.101338 | orchestrator | 2026-04-17 05:43:38.101355 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 05:43:38.101366 | orchestrator | Friday 17 April 2026 05:41:46 +0000 (0:00:00.771) 0:01:03.327 ********** 2026-04-17 05:43:38.101377 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:43:38.101388 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:43:38.101398 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:43:38.101427 | orchestrator | 2026-04-17 05:43:38.101439 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 05:43:38.101450 | orchestrator | Friday 17 April 2026 05:41:46 +0000 (0:00:00.333) 0:01:03.660 ********** 2026-04-17 05:43:38.101460 | orchestrator | skipping: [testbed-node-0] 
2026-04-17 05:43:38.101471 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:43:38.101482 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:43:38.101492 | orchestrator | 2026-04-17 05:43:38.101503 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-17 05:43:38.101513 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-17 05:43:38.101524 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-17 05:43:38.101545 | orchestrator | Friday 17 April 2026 05:41:47 +0000 (0:00:01.034) 0:01:04.695 ********** 2026-04-17 05:43:38.101556 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:43:38.101567 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:43:38.101577 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:43:38.101588 | orchestrator | 2026-04-17 05:43:38.101598 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-17 05:43:38.101609 | orchestrator | Friday 17 April 2026 05:41:48 +0000 (0:00:00.415) 0:01:05.110 ********** 2026-04-17 05:43:38.101620 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:43:38.101630 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:43:38.101641 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:43:38.101652 | orchestrator | 2026-04-17 05:43:38.101662 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-17 05:43:38.101673 | orchestrator | 2026-04-17 05:43:38.101684 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-17 05:43:38.101694 | orchestrator | Friday 17 April 2026 05:41:49 +0000 (0:00:01.082) 0:01:06.193 ********** 2026-04-17 05:43:38.101705 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:43:38.101716 | orchestrator | 2026-04-17 05:43:38.101726 | orchestrator | TASK [mariadb : Wait for 
MariaDB service port liveness] ************************ 2026-04-17 05:43:38.101737 | orchestrator | Friday 17 April 2026 05:42:15 +0000 (0:00:26.460) 0:01:32.654 ********** 2026-04-17 05:43:38.101748 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:43:38.101758 | orchestrator | 2026-04-17 05:43:38.101770 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-17 05:43:38.101780 | orchestrator | Friday 17 April 2026 05:42:21 +0000 (0:00:05.620) 0:01:38.274 ********** 2026-04-17 05:43:38.101791 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:43:38.101801 | orchestrator | 2026-04-17 05:43:38.101812 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-17 05:43:38.101823 | orchestrator | 2026-04-17 05:43:38.101833 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-17 05:43:38.101844 | orchestrator | Friday 17 April 2026 05:42:23 +0000 (0:00:02.485) 0:01:40.760 ********** 2026-04-17 05:43:38.101854 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:43:38.101865 | orchestrator | 2026-04-17 05:43:38.101876 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-17 05:43:38.101895 | orchestrator | Friday 17 April 2026 05:42:51 +0000 (0:00:27.450) 0:02:08.210 ********** 2026-04-17 05:43:38.101905 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left). 
2026-04-17 05:43:38.101917 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:43:38.101928 | orchestrator | 2026-04-17 05:43:38.101938 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-17 05:43:38.101955 | orchestrator | Friday 17 April 2026 05:42:59 +0000 (0:00:08.148) 0:02:16.359 ********** 2026-04-17 05:43:38.101967 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:43:38.101977 | orchestrator | 2026-04-17 05:43:38.101988 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-17 05:43:38.101999 | orchestrator | 2026-04-17 05:43:38.102009 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-17 05:43:38.102088 | orchestrator | Friday 17 April 2026 05:43:01 +0000 (0:00:02.570) 0:02:18.929 ********** 2026-04-17 05:43:38.102101 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:43:38.102111 | orchestrator | 2026-04-17 05:43:38.102122 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-17 05:43:38.102133 | orchestrator | Friday 17 April 2026 05:43:28 +0000 (0:00:26.602) 0:02:45.532 ********** 2026-04-17 05:43:38.102153 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:43:38.102164 | orchestrator | 2026-04-17 05:43:38.102175 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-17 05:43:38.102185 | orchestrator | Friday 17 April 2026 05:43:33 +0000 (0:00:05.210) 0:02:50.742 ********** 2026-04-17 05:43:38.102196 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:43:38.102207 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-17 05:43:38.102218 | orchestrator | 2026-04-17 05:43:38.102228 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-17 05:43:38.102239 | orchestrator | skipping: no hosts 
matched 2026-04-17 05:43:38.102270 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-17 05:43:38.102281 | orchestrator | mariadb_bootstrap_restart 2026-04-17 05:43:38.102291 | orchestrator | 2026-04-17 05:43:38.102302 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-17 05:43:38.102313 | orchestrator | skipping: no hosts matched 2026-04-17 05:43:38.102324 | orchestrator | 2026-04-17 05:43:38.102334 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-17 05:43:38.102345 | orchestrator | 2026-04-17 05:43:38.102356 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-17 05:43:38.102367 | orchestrator | Friday 17 April 2026 05:43:37 +0000 (0:00:03.270) 0:02:54.013 ********** 2026-04-17 05:43:38.102377 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:43:38.102388 | orchestrator | 2026-04-17 05:43:38.102399 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-17 05:43:38.102420 | orchestrator | Friday 17 April 2026 05:43:38 +0000 (0:00:01.026) 0:02:55.039 ********** 2026-04-17 05:44:18.357431 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:44:18.357548 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:44:18.357565 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:44:18.357578 | orchestrator | 2026-04-17 05:44:18.357591 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-17 05:44:18.357603 | orchestrator | Friday 17 April 2026 05:43:40 +0000 (0:00:02.430) 0:02:57.470 ********** 2026-04-17 05:44:18.357614 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:44:18.357640 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:44:18.357651 | orchestrator | changed: [testbed-node-0] 2026-04-17 
05:44:18.357662 | orchestrator | 2026-04-17 05:44:18.357673 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-17 05:44:18.357685 | orchestrator | Friday 17 April 2026 05:43:42 +0000 (0:00:02.205) 0:02:59.675 ********** 2026-04-17 05:44:18.357695 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:44:18.357729 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:44:18.357740 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:44:18.357751 | orchestrator | 2026-04-17 05:44:18.357762 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-17 05:44:18.357773 | orchestrator | Friday 17 April 2026 05:43:44 +0000 (0:00:02.108) 0:03:01.783 ********** 2026-04-17 05:44:18.357784 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:44:18.357794 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:44:18.357805 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:44:18.357815 | orchestrator | 2026-04-17 05:44:18.357826 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-17 05:44:18.357837 | orchestrator | Friday 17 April 2026 05:43:46 +0000 (0:00:02.158) 0:03:03.942 ********** 2026-04-17 05:44:18.357848 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:44:18.357858 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:44:18.357869 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:44:18.357879 | orchestrator | 2026-04-17 05:44:18.357890 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-17 05:44:18.357902 | orchestrator | Friday 17 April 2026 05:43:52 +0000 (0:00:05.696) 0:03:09.638 ********** 2026-04-17 05:44:18.357912 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:44:18.357923 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:44:18.357934 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:44:18.357944 | 
orchestrator | 2026-04-17 05:44:18.357957 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-17 05:44:18.357969 | orchestrator | Friday 17 April 2026 05:43:55 +0000 (0:00:02.511) 0:03:12.149 ********** 2026-04-17 05:44:18.357982 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:44:18.357994 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:44:18.358006 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:44:18.358102 | orchestrator | 2026-04-17 05:44:18.358119 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-17 05:44:18.358132 | orchestrator | Friday 17 April 2026 05:43:55 +0000 (0:00:00.637) 0:03:12.787 ********** 2026-04-17 05:44:18.358145 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:44:18.358157 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:44:18.358170 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:44:18.358183 | orchestrator | 2026-04-17 05:44:18.358195 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-17 05:44:18.358207 | orchestrator | Friday 17 April 2026 05:43:58 +0000 (0:00:02.900) 0:03:15.688 ********** 2026-04-17 05:44:18.358219 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:44:18.358232 | orchestrator | 2026-04-17 05:44:18.358244 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-04-17 05:44:18.358271 | orchestrator | Friday 17 April 2026 05:43:59 +0000 (0:00:01.215) 0:03:16.903 ********** 2026-04-17 05:44:18.358284 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:44:18.358296 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:44:18.358308 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:44:18.358319 | orchestrator | 2026-04-17 05:44:18.358330 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-17 05:44:18.358341 | orchestrator | testbed-node-0 : ok=35  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-17 05:44:18.358354 | orchestrator | testbed-node-1 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-17 05:44:18.358365 | orchestrator | testbed-node-2 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-17 05:44:18.358376 | orchestrator | 2026-04-17 05:44:18.358386 | orchestrator | 2026-04-17 05:44:18.358397 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:44:18.358408 | orchestrator | Friday 17 April 2026 05:44:17 +0000 (0:00:17.873) 0:03:34.777 ********** 2026-04-17 05:44:18.358429 | orchestrator | =============================================================================== 2026-04-17 05:44:18.358440 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 80.51s 2026-04-17 05:44:18.358451 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 18.98s 2026-04-17 05:44:18.358461 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 17.87s 2026-04-17 05:44:18.358472 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 8.33s 2026-04-17 05:44:18.358483 | orchestrator | service-check : mariadb | Get container facts --------------------------- 5.70s 2026-04-17 05:44:18.358494 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.43s 2026-04-17 05:44:18.358504 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.63s 2026-04-17 05:44:18.358515 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.48s 2026-04-17 05:44:18.358526 | orchestrator | service-check-containers : mariadb | 
Check containers ------------------- 3.43s 2026-04-17 05:44:18.358556 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.28s 2026-04-17 05:44:18.358568 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.03s 2026-04-17 05:44:18.358578 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.90s 2026-04-17 05:44:18.358589 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.88s 2026-04-17 05:44:18.358600 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.80s 2026-04-17 05:44:18.358611 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.57s 2026-04-17 05:44:18.358622 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.53s 2026-04-17 05:44:18.358633 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 2.51s 2026-04-17 05:44:18.358643 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.43s 2026-04-17 05:44:18.358654 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 2.34s 2026-04-17 05:44:18.358665 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.20s 2026-04-17 05:44:18.621772 | orchestrator | + osism apply -a upgrade rabbitmq 2026-04-17 05:44:20.002978 | orchestrator | 2026-04-17 05:44:20 | INFO  | Prepare task for execution of rabbitmq. 2026-04-17 05:44:20.078351 | orchestrator | 2026-04-17 05:44:20 | INFO  | Task 8523b018-f138-42b0-a71f-40847cc92f76 (rabbitmq) was prepared for execution. 2026-04-17 05:44:20.078447 | orchestrator | 2026-04-17 05:44:20 | INFO  | It takes a moment until task 8523b018-f138-42b0-a71f-40847cc92f76 (rabbitmq) has been started and output is visible here. 
2026-04-17 05:45:04.277049 | orchestrator | 2026-04-17 05:45:04.277171 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 05:45:04.277188 | orchestrator | 2026-04-17 05:45:04.277200 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 05:45:04.277212 | orchestrator | Friday 17 April 2026 05:44:25 +0000 (0:00:01.857) 0:00:01.857 ********** 2026-04-17 05:45:04.277223 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:45:04.277235 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:45:04.277245 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:45:04.277256 | orchestrator | 2026-04-17 05:45:04.277267 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 05:45:04.277278 | orchestrator | Friday 17 April 2026 05:44:27 +0000 (0:00:01.761) 0:00:03.618 ********** 2026-04-17 05:45:04.277289 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-17 05:45:04.277301 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-17 05:45:04.277312 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-17 05:45:04.277322 | orchestrator | 2026-04-17 05:45:04.277333 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-17 05:45:04.277368 | orchestrator | 2026-04-17 05:45:04.277380 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-17 05:45:04.277391 | orchestrator | Friday 17 April 2026 05:44:29 +0000 (0:00:02.040) 0:00:05.659 ********** 2026-04-17 05:45:04.277402 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:45:04.277413 | orchestrator | 2026-04-17 05:45:04.277438 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 
2026-04-17 05:45:04.277450 | orchestrator | Friday 17 April 2026 05:44:33 +0000 (0:00:03.805) 0:00:09.464 ********** 2026-04-17 05:45:04.277460 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:45:04.277471 | orchestrator | 2026-04-17 05:45:04.277481 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-17 05:45:04.277492 | orchestrator | Friday 17 April 2026 05:44:35 +0000 (0:00:02.891) 0:00:12.356 ********** 2026-04-17 05:45:04.277502 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:45:04.277513 | orchestrator | 2026-04-17 05:45:04.277524 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-17 05:45:04.277536 | orchestrator | Friday 17 April 2026 05:44:39 +0000 (0:00:03.130) 0:00:15.486 ********** 2026-04-17 05:45:04.277550 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:45:04.277563 | orchestrator | 2026-04-17 05:45:04.277576 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-17 05:45:04.277589 | orchestrator | Friday 17 April 2026 05:44:48 +0000 (0:00:09.817) 0:00:25.304 ********** 2026-04-17 05:45:04.277602 | orchestrator | ok: [testbed-node-0] => { 2026-04-17 05:45:04.277614 | orchestrator |  "changed": false, 2026-04-17 05:45:04.277627 | orchestrator |  "msg": "All assertions passed" 2026-04-17 05:45:04.277640 | orchestrator | } 2026-04-17 05:45:04.277653 | orchestrator | 2026-04-17 05:45:04.277666 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-17 05:45:04.277679 | orchestrator | Friday 17 April 2026 05:44:50 +0000 (0:00:01.327) 0:00:26.631 ********** 2026-04-17 05:45:04.277690 | orchestrator | ok: [testbed-node-0] => { 2026-04-17 05:45:04.277703 | orchestrator |  "changed": false, 2026-04-17 05:45:04.277715 | orchestrator |  "msg": "All assertions passed" 2026-04-17 05:45:04.277727 | orchestrator | } 2026-04-17 05:45:04.277739 | 
orchestrator | 2026-04-17 05:45:04.277752 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-17 05:45:04.277765 | orchestrator | Friday 17 April 2026 05:44:51 +0000 (0:00:01.693) 0:00:28.325 ********** 2026-04-17 05:45:04.277778 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:45:04.277790 | orchestrator | 2026-04-17 05:45:04.277802 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-17 05:45:04.277812 | orchestrator | Friday 17 April 2026 05:44:53 +0000 (0:00:02.047) 0:00:30.373 ********** 2026-04-17 05:45:04.277823 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:45:04.277834 | orchestrator | 2026-04-17 05:45:04.277844 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-17 05:45:04.277855 | orchestrator | Friday 17 April 2026 05:44:56 +0000 (0:00:02.295) 0:00:32.669 ********** 2026-04-17 05:45:04.277866 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:45:04.277876 | orchestrator | 2026-04-17 05:45:04.277887 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-17 05:45:04.277930 | orchestrator | Friday 17 April 2026 05:44:59 +0000 (0:00:02.792) 0:00:35.461 ********** 2026-04-17 05:45:04.277941 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:45:04.277952 | orchestrator | 2026-04-17 05:45:04.277962 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-17 05:45:04.277973 | orchestrator | Friday 17 April 2026 05:45:00 +0000 (0:00:01.660) 0:00:37.122 ********** 2026-04-17 05:45:04.278009 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:45:04.278095 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2026-04-17 05:45:04.278111 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:45:04.278123 | orchestrator | 2026-04-17 05:45:04.278134 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-17 05:45:04.278145 | orchestrator | Friday 17 April 2026 05:45:02 +0000 (0:00:02.229) 0:00:39.352 ********** 2026-04-17 05:45:04.278157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:45:04.278186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:45:25.069212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:45:25.069401 | orchestrator | 2026-04-17 05:45:25.069420 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-17 05:45:25.069433 | orchestrator | Friday 17 April 2026 05:45:05 +0000 (0:00:02.413) 0:00:41.766 ********** 2026-04-17 05:45:25.069444 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-17 05:45:25.069456 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-17 05:45:25.069470 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-17 05:45:25.069482 | orchestrator | 2026-04-17 05:45:25.069512 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-17 05:45:25.069536 | orchestrator | Friday 17 April 2026 05:45:07 +0000 (0:00:02.432) 0:00:44.198 ********** 2026-04-17 05:45:25.069548 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-17 05:45:25.069561 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-17 05:45:25.069573 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-17 05:45:25.069585 | orchestrator | 2026-04-17 05:45:25.069598 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-17 05:45:25.069610 | orchestrator | Friday 17 April 2026 05:45:10 +0000 (0:00:02.743) 0:00:46.941 ********** 2026-04-17 05:45:25.069642 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-17 05:45:25.069655 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-17 05:45:25.069667 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-17 05:45:25.069679 | orchestrator | 2026-04-17 05:45:25.069689 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-17 05:45:25.069700 | orchestrator | Friday 17 April 2026 05:45:12 +0000 (0:00:02.407) 0:00:49.349 ********** 2026-04-17 05:45:25.069711 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-17 05:45:25.069721 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-17 05:45:25.069732 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-17 05:45:25.069742 | orchestrator | 2026-04-17 05:45:25.069753 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-17 05:45:25.069764 | orchestrator | Friday 17 April 2026 05:45:15 +0000 (0:00:02.872) 0:00:52.222 ********** 2026-04-17 05:45:25.069775 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-17 05:45:25.069785 | orchestrator | ok: 
[testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-17 05:45:25.069796 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-17 05:45:25.069807 | orchestrator | 2026-04-17 05:45:25.069839 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-17 05:45:25.069850 | orchestrator | Friday 17 April 2026 05:45:18 +0000 (0:00:02.384) 0:00:54.606 ********** 2026-04-17 05:45:25.069861 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-17 05:45:25.069871 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-17 05:45:25.069882 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-17 05:45:25.069893 | orchestrator | 2026-04-17 05:45:25.069903 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-17 05:45:25.069914 | orchestrator | Friday 17 April 2026 05:45:20 +0000 (0:00:02.447) 0:00:57.054 ********** 2026-04-17 05:45:25.069924 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:45:25.069935 | orchestrator | 2026-04-17 05:45:25.069966 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-04-17 05:45:25.069978 | orchestrator | Friday 17 April 2026 05:45:22 +0000 (0:00:02.013) 0:00:59.068 ********** 2026-04-17 05:45:25.069997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:45:25.070011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:45:25.070118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:45:25.070131 | orchestrator | 2026-04-17 05:45:25.070143 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-17 05:45:25.070154 | orchestrator | Friday 17 April 2026 05:45:24 +0000 (0:00:02.292) 0:01:01.360 ********** 2026-04-17 05:45:25.070182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:45:33.072144 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:45:33.072242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:45:33.072275 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:45:33.072285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:45:33.072293 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:45:33.072302 | orchestrator | 2026-04-17 05:45:33.072311 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-17 05:45:33.072321 | orchestrator | Friday 17 April 2026 05:45:26 +0000 (0:00:01.596) 0:01:02.957 ********** 2026-04-17 05:45:33.072329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:45:33.072404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:45:33.072424 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:45:33.072433 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:45:33.072441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:45:33.072450 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:45:33.072458 | orchestrator | 2026-04-17 05:45:33.072466 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-17 05:45:33.072474 | orchestrator | Friday 17 April 2026 05:45:28 +0000 (0:00:01.973) 0:01:04.931 ********** 2026-04-17 05:45:33.072482 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:45:33.072491 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:45:33.072499 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:45:33.072506 | orchestrator | 2026-04-17 05:45:33.072514 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-17 05:45:33.072522 | orchestrator | Friday 17 April 2026 05:45:32 +0000 (0:00:03.599) 0:01:08.530 ********** 2026-04-17 05:45:33.072530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:45:33.072550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:47:16.889325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 05:47:16.889493 | orchestrator | 2026-04-17 05:47:16.889514 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-17 05:47:16.889528 | orchestrator | Friday 17 April 2026 05:45:34 +0000 (0:00:02.469) 0:01:11.000 ********** 2026-04-17 05:47:16.889540 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 05:47:16.889554 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:47:16.889565 | orchestrator | } 2026-04-17 05:47:16.889576 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 05:47:16.889587 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:47:16.889597 | orchestrator | } 2026-04-17 05:47:16.889608 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 05:47:16.889619 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:47:16.889629 | orchestrator | } 2026-04-17 05:47:16.889640 | orchestrator | 2026-04-17 05:47:16.889652 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 05:47:16.889663 | orchestrator | Friday 17 April 2026 05:45:36 +0000 (0:00:01.832) 0:01:12.833 ********** 2026-04-17 05:47:16.889675 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:47:16.889688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:47:16.889725 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:47:16.889752 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:47:16.889784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 05:47:16.889797 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:47:16.889808 | orchestrator | 2026-04-17 05:47:16.889819 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-17 05:47:16.889829 | orchestrator | Friday 17 April 2026 05:45:38 +0000 (0:00:02.279) 0:01:15.112 ********** 2026-04-17 05:47:16.889840 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:47:16.889853 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:47:16.889864 | orchestrator | 
changed: [testbed-node-2] 2026-04-17 05:47:16.889876 | orchestrator | 2026-04-17 05:47:16.889888 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-17 05:47:16.889899 | orchestrator | 2026-04-17 05:47:16.889911 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-17 05:47:16.889924 | orchestrator | Friday 17 April 2026 05:45:40 +0000 (0:00:01.978) 0:01:17.091 ********** 2026-04-17 05:47:16.889936 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:47:16.889949 | orchestrator | 2026-04-17 05:47:16.889961 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-17 05:47:16.889973 | orchestrator | Friday 17 April 2026 05:45:42 +0000 (0:00:02.144) 0:01:19.235 ********** 2026-04-17 05:47:16.889985 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:47:16.889997 | orchestrator | 2026-04-17 05:47:16.890009 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-17 05:47:16.890079 | orchestrator | Friday 17 April 2026 05:45:51 +0000 (0:00:08.440) 0:01:27.676 ********** 2026-04-17 05:47:16.890092 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:47:16.890105 | orchestrator | 2026-04-17 05:47:16.890117 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-17 05:47:16.890130 | orchestrator | Friday 17 April 2026 05:46:00 +0000 (0:00:09.012) 0:01:36.689 ********** 2026-04-17 05:47:16.890178 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:47:16.890191 | orchestrator | 2026-04-17 05:47:16.890204 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-17 05:47:16.890216 | orchestrator | 2026-04-17 05:47:16.890227 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-17 05:47:16.890247 | orchestrator | 
Friday 17 April 2026 05:46:09 +0000 (0:00:08.803) 0:01:45.492 ********** 2026-04-17 05:47:16.890258 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:47:16.890269 | orchestrator | 2026-04-17 05:47:16.890279 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-17 05:47:16.890290 | orchestrator | Friday 17 April 2026 05:46:10 +0000 (0:00:01.753) 0:01:47.246 ********** 2026-04-17 05:47:16.890300 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:47:16.890311 | orchestrator | 2026-04-17 05:47:16.890322 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-17 05:47:16.890333 | orchestrator | Friday 17 April 2026 05:46:19 +0000 (0:00:09.005) 0:01:56.252 ********** 2026-04-17 05:47:16.890343 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:47:16.890354 | orchestrator | 2026-04-17 05:47:16.890364 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-17 05:47:16.890375 | orchestrator | Friday 17 April 2026 05:46:34 +0000 (0:00:14.363) 0:02:10.615 ********** 2026-04-17 05:47:16.890386 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:47:16.890396 | orchestrator | 2026-04-17 05:47:16.890407 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-17 05:47:16.890417 | orchestrator | 2026-04-17 05:47:16.890428 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-17 05:47:16.890460 | orchestrator | Friday 17 April 2026 05:46:42 +0000 (0:00:08.770) 0:02:19.386 ********** 2026-04-17 05:47:16.890472 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:47:16.890483 | orchestrator | 2026-04-17 05:47:16.890493 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-17 05:47:16.890504 | orchestrator | Friday 17 April 2026 05:46:44 +0000 (0:00:01.647) 
0:02:21.033 ********** 2026-04-17 05:47:16.890514 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:47:16.890525 | orchestrator | 2026-04-17 05:47:16.890535 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-17 05:47:16.890546 | orchestrator | Friday 17 April 2026 05:46:53 +0000 (0:00:08.659) 0:02:29.692 ********** 2026-04-17 05:47:16.890557 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:47:16.890567 | orchestrator | 2026-04-17 05:47:16.890578 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-17 05:47:16.890588 | orchestrator | Friday 17 April 2026 05:47:07 +0000 (0:00:14.251) 0:02:43.944 ********** 2026-04-17 05:47:16.890599 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:47:16.890610 | orchestrator | 2026-04-17 05:47:16.890626 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-17 05:47:16.890637 | orchestrator | 2026-04-17 05:47:16.890648 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-17 05:47:16.890667 | orchestrator | Friday 17 April 2026 05:47:16 +0000 (0:00:09.354) 0:02:53.298 ********** 2026-04-17 05:47:23.590809 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:47:23.590895 | orchestrator | 2026-04-17 05:47:23.590906 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-17 05:47:23.590913 | orchestrator | Friday 17 April 2026 05:47:18 +0000 (0:00:01.680) 0:02:54.979 ********** 2026-04-17 05:47:23.590920 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:47:23.590927 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:47:23.590933 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:47:23.590939 | orchestrator | 2026-04-17 05:47:23.590946 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-17 05:47:23.590953 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 05:47:23.590961 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 05:47:23.590967 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 05:47:23.590992 | orchestrator | 2026-04-17 05:47:23.590998 | orchestrator | 2026-04-17 05:47:23.591004 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:47:23.591011 | orchestrator | Friday 17 April 2026 05:47:23 +0000 (0:00:04.442) 0:02:59.421 ********** 2026-04-17 05:47:23.591017 | orchestrator | =============================================================================== 2026-04-17 05:47:23.591023 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.63s 2026-04-17 05:47:23.591029 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 26.93s 2026-04-17 05:47:23.591035 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 26.11s 2026-04-17 05:47:23.591041 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.82s 2026-04-17 05:47:23.591047 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.54s 2026-04-17 05:47:23.591053 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.44s 2026-04-17 05:47:23.591059 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 3.81s 2026-04-17 05:47:23.591065 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.60s 2026-04-17 05:47:23.591071 | orchestrator | rabbitmq : Get current RabbitMQ 
version --------------------------------- 3.13s 2026-04-17 05:47:23.591077 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.89s 2026-04-17 05:47:23.591083 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.87s 2026-04-17 05:47:23.591089 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.79s 2026-04-17 05:47:23.591095 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.74s 2026-04-17 05:47:23.591102 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.47s 2026-04-17 05:47:23.591108 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.45s 2026-04-17 05:47:23.591114 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.43s 2026-04-17 05:47:23.591120 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.41s 2026-04-17 05:47:23.591126 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.41s 2026-04-17 05:47:23.591132 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.38s 2026-04-17 05:47:23.591138 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.30s 2026-04-17 05:47:23.850915 | orchestrator | + osism apply -a upgrade openvswitch 2026-04-17 05:47:25.199869 | orchestrator | 2026-04-17 05:47:25 | INFO  | Prepare task for execution of openvswitch. 2026-04-17 05:47:25.273363 | orchestrator | 2026-04-17 05:47:25 | INFO  | Task 718eb546-3f08-4f8a-9f0b-9a8e46f9dbaf (openvswitch) was prepared for execution. 2026-04-17 05:47:25.273464 | orchestrator | 2026-04-17 05:47:25 | INFO  | It takes a moment until task 718eb546-3f08-4f8a-9f0b-9a8e46f9dbaf (openvswitch) has been started and output is visible here. 
2026-04-17 05:47:42.579559 | orchestrator | 2026-04-17 05:47:42.580635 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 05:47:42.580730 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-17 05:47:42.580748 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-17 05:47:42.580772 | orchestrator | 2026-04-17 05:47:42.580784 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 05:47:42.580795 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-17 05:47:42.580806 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-17 05:47:42.580873 | orchestrator | Friday 17 April 2026 05:47:30 +0000 (0:00:01.654) 0:00:01.654 ********** 2026-04-17 05:47:42.580886 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:47:42.580898 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:47:42.580909 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:47:42.580920 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:47:42.580931 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:47:42.580941 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:47:42.580952 | orchestrator | 2026-04-17 05:47:42.580963 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 05:47:42.580974 | orchestrator | Friday 17 April 2026 05:47:31 +0000 (0:00:01.347) 0:00:03.002 ********** 2026-04-17 05:47:42.580985 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-17 05:47:42.580996 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-17 05:47:42.581007 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-17 05:47:42.581017 | orchestrator | ok: [testbed-node-3] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-17 05:47:42.581028 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-17 05:47:42.581039 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-17 05:47:42.581049 | orchestrator |
2026-04-17 05:47:42.581060 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-17 05:47:42.581071 | orchestrator |
2026-04-17 05:47:42.581082 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-17 05:47:42.581092 | orchestrator | Friday 17 April 2026 05:47:33 +0000 (0:00:01.376) 0:00:04.379 **********
2026-04-17 05:47:42.581105 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 05:47:42.581117 | orchestrator |
2026-04-17 05:47:42.581128 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-17 05:47:42.581139 | orchestrator | Friday 17 April 2026 05:47:35 +0000 (0:00:02.133) 0:00:06.513 **********
2026-04-17 05:47:42.581150 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-04-17 05:47:42.581161 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-04-17 05:47:42.581172 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-04-17 05:47:42.581183 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-04-17 05:47:42.581193 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-04-17 05:47:42.581204 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-04-17 05:47:42.581215 | orchestrator |
2026-04-17 05:47:42.581226 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-17 05:47:42.581236 | orchestrator | Friday 17 April 2026 05:47:37 +0000 (0:00:01.802) 0:00:08.316 **********
2026-04-17 05:47:42.581248 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-04-17 05:47:42.581259 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-04-17 05:47:42.581270 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-04-17 05:47:42.581281 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-04-17 05:47:42.581291 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-04-17 05:47:42.581302 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-04-17 05:47:42.581312 | orchestrator |
2026-04-17 05:47:42.581323 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-17 05:47:42.581334 | orchestrator | Friday 17 April 2026 05:47:38 +0000 (0:00:01.873) 0:00:10.189 **********
2026-04-17 05:47:42.581345 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-17 05:47:42.581356 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:47:42.581393 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-17 05:47:42.581411 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:47:42.581439 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-17 05:47:42.581454 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:47:42.581473 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-17 05:47:42.581491 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:47:42.581509 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-17 05:47:42.581525 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:47:42.581544 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-17 05:47:42.581561 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:47:42.581580 | orchestrator |
2026-04-17 05:47:42.581593 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-17 05:47:42.581665 | orchestrator | Friday 17 April 2026 05:47:40 +0000 (0:00:01.646) 0:00:11.835 **********
2026-04-17 05:47:42.581678 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:47:42.581689 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:47:42.581700 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:47:42.581711 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:47:42.581722 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:47:42.581761 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:47:42.581773 | orchestrator |
2026-04-17 05:47:42.581784 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-04-17 05:47:42.581795 | orchestrator | Friday 17 April 2026 05:47:41 +0000 (0:00:01.104) 0:00:12.940 **********
2026-04-17 05:47:42.581816 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:42.581836 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:42.581848 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:42.581860 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:42.581916 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:42.581941 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:44.910775 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:44.910881 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:44.910897 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:44.910933 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:44.910944 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:44.910972 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:44.910985 | orchestrator |
2026-04-17 05:47:44.911003 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-04-17 05:47:44.911016 | orchestrator | Friday 17 April 2026 05:47:43 +0000 (0:00:01.469) 0:00:14.410 **********
2026-04-17 05:47:44.911028 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:44.911039 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:44.911059 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:44.911070 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:44.911082 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:44.911106 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:49.418976 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:49.419088 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:49.419127 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:49.419140 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:49.419151 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:49.419196 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:49.419209 | orchestrator |
2026-04-17 05:47:49.419222 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-04-17 05:47:49.419235 | orchestrator | Friday 17 April 2026 05:47:46 +0000 (0:00:03.594) 0:00:18.005 **********
2026-04-17 05:47:49.419245 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:47:49.419257 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:47:49.419268 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:47:49.419279 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:47:49.419289 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:47:49.419300 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:47:49.419311 | orchestrator |
2026-04-17 05:47:49.419322 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-04-17 05:47:49.419341 | orchestrator | Friday 17 April 2026 05:47:47 +0000 (0:00:01.183) 0:00:19.188 **********
2026-04-17 05:47:49.419400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:49.419414 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:49.419425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:49.419437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:49.419462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:51.305236 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:51.305326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:51.305333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:51.305338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:51.305376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:51.305393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:51.305403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:51.305408 | orchestrator |
2026-04-17 05:47:51.305414 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-04-17 05:47:51.305420 | orchestrator | Friday 17 April 2026 05:47:50 +0000 (0:00:02.357) 0:00:21.546 **********
2026-04-17 05:47:51.305426 | orchestrator | changed: [testbed-node-0] => {
2026-04-17 05:47:51.305431 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 05:47:51.305436 | orchestrator | }
2026-04-17 05:47:51.305440 | orchestrator | changed: [testbed-node-1] => {
2026-04-17 05:47:51.305444 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 05:47:51.305449 | orchestrator | }
2026-04-17 05:47:51.305453 | orchestrator | changed: [testbed-node-2] => {
2026-04-17 05:47:51.305457 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 05:47:51.305461 | orchestrator | }
2026-04-17 05:47:51.305466 | orchestrator | changed: [testbed-node-3] => {
2026-04-17 05:47:51.305470 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 05:47:51.305474 | orchestrator | }
2026-04-17 05:47:51.305478 | orchestrator | changed: [testbed-node-4] => {
2026-04-17 05:47:51.305483 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 05:47:51.305487 | orchestrator | }
2026-04-17 05:47:51.305491 | orchestrator | changed: [testbed-node-5] => {
2026-04-17 05:47:51.305495 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 05:47:51.305500 | orchestrator | }
2026-04-17 05:47:51.305504 | orchestrator |
2026-04-17 05:47:51.305508 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-17 05:47:51.305513 | orchestrator | Friday 17 April 2026 05:47:51 +0000 (0:00:00.743) 0:00:22.290 **********
2026-04-17 05:47:51.305517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:51.305522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:47:51.305530 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:47:51.305537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:47:51.305546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:48:15.283826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-17 05:48:15.283938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-17 05:48:15.283954 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:48:15.283966 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:48:15.283976 | orchestrator
| skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-17 05:48:15.284003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-17 05:48:15.284033 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:48:15.284044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-17 05:48:15.284072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-17 05:48:15.284083 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:48:15.284093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-17 05:48:15.284103 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-17 05:48:15.284112 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:48:15.284122 | orchestrator | 2026-04-17 05:48:15.284133 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 05:48:15.284144 | orchestrator | Friday 17 April 2026 05:47:53 +0000 (0:00:01.985) 0:00:24.275 ********** 2026-04-17 05:48:15.284159 | orchestrator | 2026-04-17 05:48:15.284169 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 05:48:15.284178 | orchestrator | Friday 17 April 2026 05:47:53 +0000 (0:00:00.383) 0:00:24.658 ********** 2026-04-17 05:48:15.284188 | orchestrator | 2026-04-17 05:48:15.284197 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 05:48:15.284207 | orchestrator | Friday 17 April 2026 05:47:53 +0000 (0:00:00.148) 0:00:24.807 ********** 2026-04-17 05:48:15.284216 | orchestrator | 2026-04-17 05:48:15.284226 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 05:48:15.284235 | orchestrator | Friday 17 April 2026 05:47:53 +0000 (0:00:00.147) 0:00:24.955 ********** 2026-04-17 05:48:15.284245 | orchestrator | 2026-04-17 05:48:15.284255 | orchestrator | 
TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 05:48:15.284269 | orchestrator | Friday 17 April 2026 05:47:53 +0000 (0:00:00.175) 0:00:25.130 ********** 2026-04-17 05:48:15.284353 | orchestrator | 2026-04-17 05:48:15.284376 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 05:48:15.284392 | orchestrator | Friday 17 April 2026 05:47:54 +0000 (0:00:00.164) 0:00:25.295 ********** 2026-04-17 05:48:15.284409 | orchestrator | 2026-04-17 05:48:15.284426 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-17 05:48:15.284443 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-17 05:48:15.284463 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-17 05:48:15.284496 | orchestrator | Friday 17 April 2026 05:47:54 +0000 (0:00:00.150) 0:00:25.446 ********** 2026-04-17 05:48:15.284512 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:48:15.284529 | orchestrator | changed: [testbed-node-5] 2026-04-17 05:48:15.284547 | orchestrator | changed: [testbed-node-3] 2026-04-17 05:48:15.284563 | orchestrator | changed: [testbed-node-4] 2026-04-17 05:48:15.284578 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:48:15.284590 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:48:15.284601 | orchestrator | 2026-04-17 05:48:15.284612 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-17 05:48:15.284623 | orchestrator | Friday 17 April 2026 05:48:05 +0000 (0:00:10.821) 0:00:36.267 ********** 2026-04-17 05:48:15.284634 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:48:15.284646 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:48:15.284657 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:48:15.284668 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:48:15.284678 | orchestrator | 
ok: [testbed-node-4] 2026-04-17 05:48:15.284689 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:48:15.284706 | orchestrator | 2026-04-17 05:48:15.284722 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-17 05:48:15.284739 | orchestrator | Friday 17 April 2026 05:48:06 +0000 (0:00:01.469) 0:00:37.737 ********** 2026-04-17 05:48:15.284758 | orchestrator | changed: [testbed-node-3] 2026-04-17 05:48:15.284788 | orchestrator | changed: [testbed-node-4] 2026-04-17 05:48:30.219345 | orchestrator | changed: [testbed-node-5] 2026-04-17 05:48:30.219466 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:48:30.219483 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:48:30.219496 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:48:30.219508 | orchestrator | 2026-04-17 05:48:30.219520 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-17 05:48:30.219533 | orchestrator | Friday 17 April 2026 05:48:16 +0000 (0:00:10.110) 0:00:47.848 ********** 2026-04-17 05:48:30.219544 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-17 05:48:30.219556 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-17 05:48:30.219621 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-17 05:48:30.219633 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-17 05:48:30.219644 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-17 05:48:30.219654 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-17 05:48:30.219665 | 
orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-17 05:48:30.219675 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-17 05:48:30.219686 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-17 05:48:30.219697 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-17 05:48:30.219707 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-17 05:48:30.219718 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-17 05:48:30.219729 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 05:48:30.219739 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 05:48:30.219750 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 05:48:30.219761 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 05:48:30.219771 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 05:48:30.219781 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 05:48:30.219792 | orchestrator | 2026-04-17 05:48:30.219803 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-17 05:48:30.219814 | orchestrator | Friday 17 
April 2026 05:48:23 +0000 (0:00:06.661) 0:00:54.509 ********** 2026-04-17 05:48:30.219825 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-17 05:48:30.219836 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:48:30.219862 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-17 05:48:30.219873 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:48:30.219884 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-17 05:48:30.219895 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:48:30.219905 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-04-17 05:48:30.219916 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-04-17 05:48:30.219927 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-04-17 05:48:30.219937 | orchestrator | 2026-04-17 05:48:30.219948 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-17 05:48:30.219959 | orchestrator | Friday 17 April 2026 05:48:25 +0000 (0:00:02.462) 0:00:56.972 ********** 2026-04-17 05:48:30.219970 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-04-17 05:48:30.219981 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:48:30.219991 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-17 05:48:30.220002 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:48:30.220012 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-17 05:48:30.220023 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:48:30.220034 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-17 05:48:30.220052 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-17 05:48:30.220063 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-17 05:48:30.220074 | orchestrator | 2026-04-17 05:48:30.220085 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-17 05:48:30.220097 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 05:48:30.220109 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 05:48:30.220138 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 05:48:30.220149 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 05:48:30.220160 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 05:48:30.220171 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 05:48:30.220181 | orchestrator | 2026-04-17 05:48:30.220192 | orchestrator | 2026-04-17 05:48:30.220203 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 05:48:30.220213 | orchestrator | Friday 17 April 2026 05:48:29 +0000 (0:00:03.945) 0:01:00.917 ********** 2026-04-17 05:48:30.220224 | orchestrator | =============================================================================== 2026-04-17 05:48:30.220234 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.82s 2026-04-17 05:48:30.220267 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.11s 2026-04-17 05:48:30.220278 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.66s 2026-04-17 05:48:30.220288 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.95s 2026-04-17 05:48:30.220299 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.59s 2026-04-17 05:48:30.220310 | orchestrator | openvswitch : 
Ensuring OVS bridge is properly setup --------------------- 2.46s 2026-04-17 05:48:30.220320 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.36s 2026-04-17 05:48:30.220331 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.13s 2026-04-17 05:48:30.220342 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.99s 2026-04-17 05:48:30.220352 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.87s 2026-04-17 05:48:30.220362 | orchestrator | module-load : Load modules ---------------------------------------------- 1.80s 2026-04-17 05:48:30.220373 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.65s 2026-04-17 05:48:30.220383 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.47s 2026-04-17 05:48:30.220394 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.47s 2026-04-17 05:48:30.220405 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.38s 2026-04-17 05:48:30.220415 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.35s 2026-04-17 05:48:30.220426 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.18s 2026-04-17 05:48:30.220436 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.17s 2026-04-17 05:48:30.220447 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.11s 2026-04-17 05:48:30.220457 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.74s 2026-04-17 05:48:30.501088 | orchestrator | + osism apply -a upgrade ovn 2026-04-17 05:48:31.951547 | orchestrator | 2026-04-17 05:48:31 | INFO  | Prepare task for execution of ovn. 
2026-04-17 05:48:32.036960 | orchestrator | 2026-04-17 05:48:32 | INFO  | Task 58c1f779-8043-414a-8351-572f9b8e09fc (ovn) was prepared for execution. 2026-04-17 05:48:32.037051 | orchestrator | 2026-04-17 05:48:32 | INFO  | It takes a moment until task 58c1f779-8043-414a-8351-572f9b8e09fc (ovn) has been started and output is visible here. 2026-04-17 05:48:55.566149 | orchestrator | 2026-04-17 05:48:55.566457 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 05:48:55.566486 | orchestrator | 2026-04-17 05:48:55.566498 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 05:48:55.566512 | orchestrator | Friday 17 April 2026 05:48:37 +0000 (0:00:01.962) 0:00:01.962 ********** 2026-04-17 05:48:55.566531 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:48:55.566550 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:48:55.566569 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:48:55.566589 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:48:55.566609 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:48:55.566629 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:48:55.566648 | orchestrator | 2026-04-17 05:48:55.566667 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 05:48:55.566686 | orchestrator | Friday 17 April 2026 05:48:40 +0000 (0:00:02.884) 0:00:04.846 ********** 2026-04-17 05:48:55.566706 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-17 05:48:55.566727 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-17 05:48:55.566747 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-17 05:48:55.566762 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-17 05:48:55.566775 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-17 05:48:55.566788 | orchestrator | ok: [testbed-node-5] => 
(item=enable_ovn_True) 2026-04-17 05:48:55.566800 | orchestrator | 2026-04-17 05:48:55.566813 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-17 05:48:55.566825 | orchestrator | 2026-04-17 05:48:55.566838 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-17 05:48:55.566851 | orchestrator | Friday 17 April 2026 05:48:43 +0000 (0:00:03.246) 0:00:08.092 ********** 2026-04-17 05:48:55.566863 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 05:48:55.566877 | orchestrator | 2026-04-17 05:48:55.566890 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-17 05:48:55.566902 | orchestrator | Friday 17 April 2026 05:48:49 +0000 (0:00:05.529) 0:00:13.622 ********** 2026-04-17 05:48:55.566916 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.566949 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.566961 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567001 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567013 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567060 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567072 | orchestrator | 2026-04-17 05:48:55.567084 | orchestrator | TASK [ovn-controller : Copying over config.json files for 
services] ************ 2026-04-17 05:48:55.567095 | orchestrator | Friday 17 April 2026 05:48:52 +0000 (0:00:02.836) 0:00:16.459 ********** 2026-04-17 05:48:55.567106 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567129 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567140 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567151 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567171 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567212 | orchestrator | 2026-04-17 05:48:55.567225 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-17 05:48:55.567235 | orchestrator | Friday 17 April 2026 05:48:54 +0000 (0:00:02.858) 0:00:19.318 ********** 2026-04-17 05:48:55.567247 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567264 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:48:55.567283 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099493 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099606 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099623 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099636 | orchestrator | 2026-04-17 05:49:05.099649 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-17 05:49:05.099662 | orchestrator | Friday 17 April 2026 05:48:57 +0000 (0:00:02.233) 0:00:21.551 ********** 2026-04-17 05:49:05.099674 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099740 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099755 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099767 | orchestrator | ok: [testbed-node-3] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099792 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099825 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099838 | orchestrator | 2026-04-17 05:49:05.099850 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-04-17 05:49:05.099861 | orchestrator | Friday 17 April 2026 05:49:00 +0000 (0:00:03.209) 0:00:24.761 ********** 2026-04-17 05:49:05.099874 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.099999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:49:05.100013 | orchestrator | 2026-04-17 05:49:05.100028 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-04-17 05:49:05.100042 | orchestrator | Friday 17 April 2026 05:49:03 +0000 (0:00:02.693) 0:00:27.455 ********** 2026-04-17 05:49:05.100055 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 05:49:05.100070 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:49:05.100083 | orchestrator | } 2026-04-17 05:49:05.100096 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 05:49:05.100109 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:49:05.100121 | orchestrator | } 2026-04-17 05:49:05.100134 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 05:49:05.100147 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:49:05.100186 | orchestrator | } 2026-04-17 05:49:05.100199 | orchestrator | changed: [testbed-node-3] => { 2026-04-17 05:49:05.100212 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:49:05.100225 | orchestrator | } 2026-04-17 05:49:05.100242 | orchestrator | changed: [testbed-node-4] => { 2026-04-17 05:49:05.100255 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:49:05.100267 | orchestrator | } 2026-04-17 05:49:05.100280 | orchestrator | changed: [testbed-node-5] => { 2026-04-17 05:49:05.100292 | orchestrator 
|  "msg": "Notifying handlers" 2026-04-17 05:49:05.100305 | orchestrator | } 2026-04-17 05:49:05.100317 | orchestrator | 2026-04-17 05:49:05.100331 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 05:49:05.100344 | orchestrator | Friday 17 April 2026 05:49:05 +0000 (0:00:01.985) 0:00:29.440 ********** 2026-04-17 05:49:05.100368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:49:28.221429 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:49:28.221552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:49:28.221594 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:49:28.221608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:49:28.221619 | 
orchestrator | skipping: [testbed-node-2] 2026-04-17 05:49:28.221630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:49:28.221641 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:49:28.221652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:49:28.221664 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:49:28.221675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:49:28.221686 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:49:28.221697 | orchestrator | 2026-04-17 05:49:28.221709 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-17 05:49:28.221720 | orchestrator | Friday 17 April 2026 05:49:07 +0000 (0:00:02.769) 0:00:32.210 ********** 
2026-04-17 05:49:28.221731 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:49:28.221743 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:49:28.221754 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:49:28.221764 | orchestrator | ok: [testbed-node-3] 2026-04-17 05:49:28.221775 | orchestrator | ok: [testbed-node-4] 2026-04-17 05:49:28.221785 | orchestrator | ok: [testbed-node-5] 2026-04-17 05:49:28.221796 | orchestrator | 2026-04-17 05:49:28.221807 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-17 05:49:28.221818 | orchestrator | Friday 17 April 2026 05:49:11 +0000 (0:00:03.815) 0:00:36.025 ********** 2026-04-17 05:49:28.221828 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-17 05:49:28.221840 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-17 05:49:28.221865 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-17 05:49:28.221877 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-17 05:49:28.221887 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-17 05:49:28.221906 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-17 05:49:28.221917 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 05:49:28.221928 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 05:49:28.221941 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 05:49:28.221954 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 05:49:28.221966 | orchestrator | ok: [testbed-node-5] => (item={'name': 
'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 05:49:28.221998 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 05:49:28.222074 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-17 05:49:28.222090 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-17 05:49:28.222129 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-17 05:49:28.222142 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-17 05:49:28.222155 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-17 05:49:28.222168 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-17 05:49:28.222181 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 05:49:28.222194 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 05:49:28.222206 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 05:49:28.222219 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 05:49:28.222231 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 05:49:28.222244 | orchestrator | ok: 
[testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 05:49:28.222256 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 05:49:28.222269 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 05:49:28.222281 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 05:49:28.222293 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 05:49:28.222306 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 05:49:28.222317 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 05:49:28.222328 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 05:49:28.222339 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 05:49:28.222350 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 05:49:28.222360 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 05:49:28.222371 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 05:49:28.222382 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 05:49:28.222400 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-17 05:49:28.222411 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-17 05:49:28.222422 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 
'physnet1:br-ex', 'state': 'absent'}) 2026-04-17 05:49:28.222433 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-17 05:49:28.222444 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-17 05:49:28.222461 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-17 05:49:28.222520 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-17 05:49:28.222535 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-17 05:49:28.222545 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-17 05:49:28.222557 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-17 05:49:28.222568 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-17 05:49:28.222587 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-17 05:52:22.686637 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-17 05:52:22.687778 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-17 05:52:22.687835 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-17 05:52:22.687858 
| orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-17 05:52:22.687876 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-17 05:52:22.687893 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-17 05:52:22.687913 | orchestrator | 2026-04-17 05:52:22.687933 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 05:52:22.687951 | orchestrator | Friday 17 April 2026 05:49:31 +0000 (0:00:20.011) 0:00:56.037 ********** 2026-04-17 05:52:22.687970 | orchestrator | 2026-04-17 05:52:22.687983 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 05:52:22.687994 | orchestrator | Friday 17 April 2026 05:49:32 +0000 (0:00:00.487) 0:00:56.524 ********** 2026-04-17 05:52:22.688005 | orchestrator | 2026-04-17 05:52:22.688016 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 05:52:22.688027 | orchestrator | Friday 17 April 2026 05:49:32 +0000 (0:00:00.471) 0:00:56.996 ********** 2026-04-17 05:52:22.688038 | orchestrator | 2026-04-17 05:52:22.688048 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 05:52:22.688059 | orchestrator | Friday 17 April 2026 05:49:33 +0000 (0:00:00.698) 0:00:57.695 ********** 2026-04-17 05:52:22.688069 | orchestrator | 2026-04-17 05:52:22.688080 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 05:52:22.688095 | orchestrator | Friday 17 April 2026 05:49:33 +0000 (0:00:00.439) 0:00:58.135 ********** 2026-04-17 05:52:22.688147 | orchestrator | 2026-04-17 05:52:22.688166 | orchestrator | TASK 
[ovn-controller : Flush handlers] ***************************************** 2026-04-17 05:52:22.688182 | orchestrator | Friday 17 April 2026 05:49:34 +0000 (0:00:00.507) 0:00:58.642 ********** 2026-04-17 05:52:22.688202 | orchestrator | 2026-04-17 05:52:22.688220 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-17 05:52:22.688237 | orchestrator | Friday 17 April 2026 05:49:35 +0000 (0:00:00.930) 0:00:59.573 ********** 2026-04-17 05:52:22.688249 | orchestrator | 2026-04-17 05:52:22.688259 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-04-17 05:52:22.688272 | orchestrator | changed: [testbed-node-3] 2026-04-17 05:52:22.688283 | orchestrator | changed: [testbed-node-5] 2026-04-17 05:52:22.688294 | orchestrator | changed: [testbed-node-4] 2026-04-17 05:52:22.688305 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:52:22.688315 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:52:22.688326 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:52:22.688336 | orchestrator | 2026-04-17 05:52:22.688347 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-17 05:52:22.688358 | orchestrator | 2026-04-17 05:52:22.688369 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-17 05:52:22.688379 | orchestrator | Friday 17 April 2026 05:51:47 +0000 (0:02:12.013) 0:03:11.586 ********** 2026-04-17 05:52:22.688390 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:52:22.688401 | orchestrator | 2026-04-17 05:52:22.688411 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-17 05:52:22.688422 | orchestrator | Friday 17 April 2026 05:51:48 +0000 (0:00:01.777) 0:03:13.364 ********** 2026-04-17 05:52:22.688433 | 
orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 05:52:22.688444 | orchestrator | 2026-04-17 05:52:22.688454 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-17 05:52:22.688465 | orchestrator | Friday 17 April 2026 05:51:50 +0000 (0:00:02.070) 0:03:15.434 ********** 2026-04-17 05:52:22.688475 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.688488 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.688499 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.688510 | orchestrator | 2026-04-17 05:52:22.688521 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-17 05:52:22.688546 | orchestrator | Friday 17 April 2026 05:51:52 +0000 (0:00:01.973) 0:03:17.408 ********** 2026-04-17 05:52:22.688557 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.688567 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.688578 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.688588 | orchestrator | 2026-04-17 05:52:22.688599 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-17 05:52:22.688612 | orchestrator | Friday 17 April 2026 05:51:54 +0000 (0:00:01.359) 0:03:18.768 ********** 2026-04-17 05:52:22.688628 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.688646 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.688663 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.688681 | orchestrator | 2026-04-17 05:52:22.688699 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-17 05:52:22.688711 | orchestrator | Friday 17 April 2026 05:51:55 +0000 (0:00:01.438) 0:03:20.206 ********** 2026-04-17 05:52:22.688721 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.688732 | orchestrator | ok: [testbed-node-1] 2026-04-17 
05:52:22.688767 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.688779 | orchestrator | 2026-04-17 05:52:22.688789 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-17 05:52:22.688800 | orchestrator | Friday 17 April 2026 05:51:57 +0000 (0:00:01.396) 0:03:21.603 ********** 2026-04-17 05:52:22.688815 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.688862 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.688880 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.688914 | orchestrator | 2026-04-17 05:52:22.688926 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-17 05:52:22.688937 | orchestrator | Friday 17 April 2026 05:51:58 +0000 (0:00:01.449) 0:03:23.053 ********** 2026-04-17 05:52:22.688948 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:52:22.688958 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:52:22.688969 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:52:22.688980 | orchestrator | 2026-04-17 05:52:22.688991 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-17 05:52:22.689001 | orchestrator | Friday 17 April 2026 05:52:00 +0000 (0:00:01.660) 0:03:24.713 ********** 2026-04-17 05:52:22.689012 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.689023 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.689033 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.689044 | orchestrator | 2026-04-17 05:52:22.689054 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-17 05:52:22.689065 | orchestrator | Friday 17 April 2026 05:52:02 +0000 (0:00:01.866) 0:03:26.579 ********** 2026-04-17 05:52:22.689076 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.689086 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.689097 | orchestrator | ok: [testbed-node-2] 
2026-04-17 05:52:22.689107 | orchestrator | 2026-04-17 05:52:22.689118 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-17 05:52:22.689128 | orchestrator | Friday 17 April 2026 05:52:03 +0000 (0:00:01.411) 0:03:27.991 ********** 2026-04-17 05:52:22.689139 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.689149 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.689159 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.689170 | orchestrator | 2026-04-17 05:52:22.689180 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-17 05:52:22.689191 | orchestrator | Friday 17 April 2026 05:52:05 +0000 (0:00:01.944) 0:03:29.935 ********** 2026-04-17 05:52:22.689201 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.689212 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.689222 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.689233 | orchestrator | 2026-04-17 05:52:22.689249 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-17 05:52:22.689271 | orchestrator | Friday 17 April 2026 05:52:07 +0000 (0:00:01.664) 0:03:31.600 ********** 2026-04-17 05:52:22.689297 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:52:22.689315 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:52:22.689333 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:52:22.689350 | orchestrator | 2026-04-17 05:52:22.689367 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-17 05:52:22.689383 | orchestrator | Friday 17 April 2026 05:52:08 +0000 (0:00:01.481) 0:03:33.082 ********** 2026-04-17 05:52:22.689399 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:52:22.689418 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:52:22.689436 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:52:22.689453 | 
orchestrator | 2026-04-17 05:52:22.689472 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-17 05:52:22.689490 | orchestrator | Friday 17 April 2026 05:52:10 +0000 (0:00:01.451) 0:03:34.533 ********** 2026-04-17 05:52:22.689508 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.689526 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.689545 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.689562 | orchestrator | 2026-04-17 05:52:22.689582 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-17 05:52:22.689599 | orchestrator | Friday 17 April 2026 05:52:12 +0000 (0:00:02.205) 0:03:36.739 ********** 2026-04-17 05:52:22.689618 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.689636 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.689655 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.689674 | orchestrator | 2026-04-17 05:52:22.689691 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-17 05:52:22.689721 | orchestrator | Friday 17 April 2026 05:52:13 +0000 (0:00:01.403) 0:03:38.143 ********** 2026-04-17 05:52:22.689732 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.689792 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.689804 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.689814 | orchestrator | 2026-04-17 05:52:22.689825 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-17 05:52:22.689836 | orchestrator | Friday 17 April 2026 05:52:15 +0000 (0:00:02.066) 0:03:40.209 ********** 2026-04-17 05:52:22.689846 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:52:22.689857 | orchestrator | ok: [testbed-node-1] 2026-04-17 05:52:22.689867 | orchestrator | ok: [testbed-node-2] 2026-04-17 05:52:22.689878 | orchestrator | 2026-04-17 05:52:22.689888 | orchestrator | TASK [ovn-db : 
Fail on existing OVN SB cluster with no leader] ***************** 2026-04-17 05:52:22.689899 | orchestrator | Friday 17 April 2026 05:52:17 +0000 (0:00:01.442) 0:03:41.652 ********** 2026-04-17 05:52:22.689909 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:52:22.689920 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:52:22.689930 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:52:22.689941 | orchestrator | 2026-04-17 05:52:22.689960 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-17 05:52:22.689972 | orchestrator | Friday 17 April 2026 05:52:18 +0000 (0:00:01.633) 0:03:43.285 ********** 2026-04-17 05:52:22.689982 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:52:22.689993 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:52:22.690003 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:52:22.690074 | orchestrator | 2026-04-17 05:52:22.690088 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-17 05:52:22.690099 | orchestrator | Friday 17 April 2026 05:52:20 +0000 (0:00:01.856) 0:03:45.142 ********** 2026-04-17 05:52:22.690163 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220387 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220492 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220509 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220547 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220559 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220581 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:29.220625 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220638 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:29.220663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:29.220683 | orchestrator | 
2026-04-17 05:52:29.220696 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-17 05:52:29.220708 | orchestrator | Friday 17 April 2026 05:52:24 +0000 (0:00:04.131) 0:03:49.274 ********** 2026-04-17 05:52:29.220720 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220805 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220824 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220835 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:29.220855 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:45.868405 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:45.868554 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:45.868572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:45.868584 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:45.868595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:45.868622 | orchestrator | 
ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:45.868634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:45.868646 | orchestrator | 2026-04-17 05:52:45.868659 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-04-17 05:52:45.868671 | orchestrator | Friday 17 April 2026 05:52:31 +0000 (0:00:06.686) 0:03:55.961 ********** 2026-04-17 05:52:45.868683 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-04-17 05:52:45.868756 | orchestrator | 2026-04-17 05:52:45.868769 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-04-17 05:52:45.868780 | orchestrator | Friday 17 April 2026 05:52:33 +0000 (0:00:02.205) 0:03:58.166 ********** 2026-04-17 05:52:45.868791 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:52:45.868803 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:52:45.868831 | orchestrator | changed: [testbed-node-2] 2026-04-17 
05:52:45.868842 | orchestrator | 2026-04-17 05:52:45.868862 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-04-17 05:52:45.868873 | orchestrator | Friday 17 April 2026 05:52:35 +0000 (0:00:01.900) 0:04:00.067 ********** 2026-04-17 05:52:45.868884 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:52:45.868895 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:52:45.868905 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:52:45.868916 | orchestrator | 2026-04-17 05:52:45.868926 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-04-17 05:52:45.868938 | orchestrator | Friday 17 April 2026 05:52:38 +0000 (0:00:03.022) 0:04:03.090 ********** 2026-04-17 05:52:45.868951 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:52:45.868964 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:52:45.868976 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:52:45.868988 | orchestrator | 2026-04-17 05:52:45.868999 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-04-17 05:52:45.869012 | orchestrator | Friday 17 April 2026 05:52:41 +0000 (0:00:02.827) 0:04:05.917 ********** 2026-04-17 05:52:45.869025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:45.869040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:45.869053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:45.869067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:45.869080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:45.869093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:45.869120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:51.322615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:51.322804 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:51.322923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:51.322946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:52:51.322963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': 
['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:51.322976 | orchestrator | 2026-04-17 05:52:51.322989 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-17 05:52:51.323003 | orchestrator | Friday 17 April 2026 05:52:47 +0000 (0:00:05.837) 0:04:11.755 ********** 2026-04-17 05:52:51.323015 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 05:52:51.323027 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:52:51.323058 | orchestrator | } 2026-04-17 05:52:51.323069 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 05:52:51.323080 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:52:51.323093 | orchestrator | } 2026-04-17 05:52:51.323112 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 05:52:51.323129 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:52:51.323147 | orchestrator | } 2026-04-17 05:52:51.323165 | orchestrator | 2026-04-17 05:52:51.323183 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 05:52:51.323200 | orchestrator | Friday 17 April 2026 05:52:48 +0000 (0:00:01.507) 0:04:13.262 ********** 2026-04-17 05:52:51.323220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:51.323267 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:51.323288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:51.323308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:51.323327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:51.323354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:51.323388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:51.323408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-17 05:52:51.323427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 05:52:51.323459 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 05:54:48.222013 | orchestrator | 2026-04-17 05:54:48.222191 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-04-17 05:54:48.222208 | orchestrator | Friday 17 April 2026 05:52:52 +0000 (0:00:03.801) 0:04:17.064 ********** 2026-04-17 05:54:48.222221 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-04-17 05:54:48.222234 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-04-17 05:54:48.222245 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-04-17 05:54:48.222257 | orchestrator | 2026-04-17 05:54:48.222268 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-17 05:54:48.222280 | orchestrator | Friday 17 April 2026 05:53:19 +0000 (0:00:26.932) 
0:04:43.997 ********** 2026-04-17 05:54:48.222291 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 05:54:48.222302 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:54:48.222313 | orchestrator | } 2026-04-17 05:54:48.222324 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 05:54:48.222335 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:54:48.222346 | orchestrator | } 2026-04-17 05:54:48.222357 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 05:54:48.222368 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 05:54:48.222378 | orchestrator | } 2026-04-17 05:54:48.222389 | orchestrator | 2026-04-17 05:54:48.222401 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-17 05:54:48.222412 | orchestrator | Friday 17 April 2026 05:53:21 +0000 (0:00:01.697) 0:04:45.695 ********** 2026-04-17 05:54:48.222423 | orchestrator | 2026-04-17 05:54:48.222434 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-17 05:54:48.222469 | orchestrator | Friday 17 April 2026 05:53:21 +0000 (0:00:00.502) 0:04:46.197 ********** 2026-04-17 05:54:48.222481 | orchestrator | 2026-04-17 05:54:48.222521 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-17 05:54:48.222534 | orchestrator | Friday 17 April 2026 05:53:22 +0000 (0:00:00.470) 0:04:46.668 ********** 2026-04-17 05:54:48.222547 | orchestrator | 2026-04-17 05:54:48.222559 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-17 05:54:48.222572 | orchestrator | Friday 17 April 2026 05:53:23 +0000 (0:00:00.837) 0:04:47.505 ********** 2026-04-17 05:54:48.222585 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:54:48.222596 | orchestrator | changed: [testbed-node-1] 2026-04-17 05:54:48.222607 | orchestrator | changed: [testbed-node-2] 2026-04-17 05:54:48.222618 | 
orchestrator |
2026-04-17 05:54:48.222629 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-17 05:54:48.222655 | orchestrator | Friday 17 April 2026 05:53:40 +0000 (0:00:17.438) 0:05:04.943 **********
2026-04-17 05:54:48.222667 | orchestrator | changed: [testbed-node-0]
2026-04-17 05:54:48.222678 | orchestrator | changed: [testbed-node-1]
2026-04-17 05:54:48.222688 | orchestrator | changed: [testbed-node-2]
2026-04-17 05:54:48.222699 | orchestrator |
2026-04-17 05:54:48.222710 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-04-17 05:54:48.222721 | orchestrator | Friday 17 April 2026 05:53:58 +0000 (0:00:17.544) 0:05:22.488 **********
2026-04-17 05:54:48.222732 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-04-17 05:54:48.222743 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-04-17 05:54:48.222754 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-04-17 05:54:48.222765 | orchestrator |
2026-04-17 05:54:48.222776 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-17 05:54:48.222786 | orchestrator | Friday 17 April 2026 05:54:09 +0000 (0:00:11.619) 0:05:34.107 **********
2026-04-17 05:54:48.222797 | orchestrator | changed: [testbed-node-0]
2026-04-17 05:54:48.222808 | orchestrator | changed: [testbed-node-1]
2026-04-17 05:54:48.222819 | orchestrator | changed: [testbed-node-2]
2026-04-17 05:54:48.222829 | orchestrator |
2026-04-17 05:54:48.222840 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-17 05:54:48.222851 | orchestrator | Friday 17 April 2026 05:54:27 +0000 (0:00:17.546) 0:05:51.653 **********
2026-04-17 05:54:48.222862 | orchestrator | Pausing for 5 seconds
2026-04-17 05:54:48.222873 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:54:48.222885 | orchestrator |
2026-04-17 05:54:48.222895 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-17 05:54:48.222906 | orchestrator | Friday 17 April 2026 05:54:33 +0000 (0:00:06.236) 0:05:57.890 **********
2026-04-17 05:54:48.222917 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:54:48.222928 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:54:48.222939 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:54:48.222949 | orchestrator |
2026-04-17 05:54:48.222960 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-17 05:54:48.222971 | orchestrator | Friday 17 April 2026 05:54:35 +0000 (0:00:01.915) 0:05:59.805 **********
2026-04-17 05:54:48.222982 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:54:48.222993 | orchestrator | changed: [testbed-node-0]
2026-04-17 05:54:48.223003 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:54:48.223014 | orchestrator |
2026-04-17 05:54:48.223024 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-17 05:54:48.223035 | orchestrator | Friday 17 April 2026 05:54:37 +0000 (0:00:01.768) 0:06:01.574 **********
2026-04-17 05:54:48.223046 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:54:48.223057 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:54:48.223068 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:54:48.223078 | orchestrator |
2026-04-17 05:54:48.223089 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-17 05:54:48.223100 | orchestrator | Friday 17 April 2026 05:54:39 +0000 (0:00:01.903) 0:06:03.478 **********
2026-04-17 05:54:48.223119 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:54:48.223130 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:54:48.223140 | orchestrator | changed: [testbed-node-1]
2026-04-17 05:54:48.223151 | orchestrator |
2026-04-17 05:54:48.223162 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-17 05:54:48.223172 | orchestrator | Friday 17 April 2026 05:54:40 +0000 (0:00:01.866) 0:06:05.345 **********
2026-04-17 05:54:48.223183 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:54:48.223193 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:54:48.223204 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:54:48.223215 | orchestrator |
2026-04-17 05:54:48.223226 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-17 05:54:48.223255 | orchestrator | Friday 17 April 2026 05:54:42 +0000 (0:00:01.828) 0:06:07.174 **********
2026-04-17 05:54:48.223266 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:54:48.223277 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:54:48.223288 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:54:48.223299 | orchestrator |
2026-04-17 05:54:48.223309 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-04-17 05:54:48.223320 | orchestrator | Friday 17 April 2026 05:54:44 +0000 (0:00:02.194) 0:06:09.368 **********
2026-04-17 05:54:48.223331 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-04-17 05:54:48.223341 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-04-17 05:54:48.223352 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-04-17 05:54:48.223363 | orchestrator |
2026-04-17 05:54:48.223374 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 05:54:48.223386 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 05:54:48.223398 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 05:54:48.223435 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-17 05:54:48.223446 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-17 05:54:48.223457 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-17 05:54:48.223467 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-17 05:54:48.223478 | orchestrator |
2026-04-17 05:54:48.223507 | orchestrator |
2026-04-17 05:54:48.223518 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 05:54:48.223529 | orchestrator | Friday 17 April 2026 05:54:47 +0000 (0:00:02.757) 0:06:12.126 **********
2026-04-17 05:54:48.223545 | orchestrator | ===============================================================================
2026-04-17 05:54:48.223556 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 132.01s
2026-04-17 05:54:48.223567 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 26.93s
2026-04-17 05:54:48.223577 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.01s
2026-04-17 05:54:48.223588 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.55s
2026-04-17 05:54:48.223599 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 17.54s
2026-04-17 05:54:48.223609 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 17.44s
2026-04-17 05:54:48.223620 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 11.62s
2026-04-17 05:54:48.223631 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.69s
2026-04-17 05:54:48.223648 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.24s
2026-04-17 05:54:48.223659 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.84s
2026-04-17 05:54:48.223670 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 5.53s
2026-04-17 05:54:48.223680 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.13s
2026-04-17 05:54:48.223691 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.82s
2026-04-17 05:54:48.223701 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.80s
2026-04-17 05:54:48.223712 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.54s
2026-04-17 05:54:48.223722 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.25s
2026-04-17 05:54:48.223733 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.21s
2026-04-17 05:54:48.223744 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 3.02s
2026-04-17 05:54:48.223754 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.88s
2026-04-17 05:54:48.223765 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.86s
2026-04-17 05:54:48.441040 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-17 05:54:48.441139 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-17 05:54:48.441156 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-04-17 05:54:48.451044 | orchestrator | + set -e
2026-04-17 05:54:48.451105 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-17 05:54:48.451120 | orchestrator | ++ export INTERACTIVE=false
2026-04-17 05:54:48.451133 | orchestrator | ++ INTERACTIVE=false
2026-04-17 05:54:48.451144 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-17 05:54:48.451155 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-17 05:54:48.451167 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-04-17 05:54:49.838221 | orchestrator | 2026-04-17 05:54:49 | INFO  | Prepare task for execution of ceph-rolling_update.
2026-04-17 05:54:49.907813 | orchestrator | 2026-04-17 05:54:49 | INFO  | Task 8790fddc-b901-4a25-82ad-9144309978cc (ceph-rolling_update) was prepared for execution.
2026-04-17 05:54:49.907910 | orchestrator | 2026-04-17 05:54:49 | INFO  | It takes a moment until task 8790fddc-b901-4a25-82ad-9144309978cc (ceph-rolling_update) has been started and output is visible here.
2026-04-17 05:55:53.019817 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-17 05:55:53.019932 | orchestrator | 2.16.14
2026-04-17 05:55:53.019948 | orchestrator |
2026-04-17 05:55:53.019961 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-04-17 05:55:53.019973 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-17 05:55:53.019985 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-17 05:55:53.020008 | orchestrator |
2026-04-17 05:55:53.020033 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-04-17 05:55:53.020045 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-17 05:55:53.020055 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-17 05:55:53.020077 | orchestrator | Friday 17 April 2026 05:54:58 +0000 (0:00:01.484) 0:00:01.484 **********
2026-04-17 05:55:53.020088 | orchestrator | skipping: [localhost]
2026-04-17 05:55:53.020099 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-04-17 05:55:53.020110 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-04-17 05:55:53.020121 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-04-17 05:55:53.020156 | orchestrator |
2026-04-17 05:55:53.020168 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-04-17 05:55:53.020178 | orchestrator |
2026-04-17 05:55:53.020189 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-04-17 05:55:53.020200 | orchestrator | Friday 17 April 2026 05:55:00 +0000 (0:00:01.350) 0:00:02.834 **********
2026-04-17 05:55:53.020210 | orchestrator | ok: [testbed-node-0] => {
2026-04-17 05:55:53.020221 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-17 05:55:53.020232 | orchestrator | }
2026-04-17 05:55:53.020243 | orchestrator | ok: [testbed-node-1] => {
2026-04-17 05:55:53.020254 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-17 05:55:53.020264 | orchestrator | }
2026-04-17 05:55:53.020291 | orchestrator | ok: [testbed-node-2] => {
2026-04-17 05:55:53.020302 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-17 05:55:53.020313 | orchestrator | }
2026-04-17 05:55:53.020324 | orchestrator | ok: [testbed-node-3] => {
2026-04-17 05:55:53.020334 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-17 05:55:53.020345 | orchestrator | }
2026-04-17 05:55:53.020356 | orchestrator | ok: [testbed-node-4] => {
2026-04-17 05:55:53.020366 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-17 05:55:53.020377 | orchestrator | }
2026-04-17 05:55:53.020423 | orchestrator | ok: [testbed-node-5] => {
2026-04-17 05:55:53.020434 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-17 05:55:53.020445 | orchestrator | }
2026-04-17 05:55:53.020456 | orchestrator | ok: [testbed-manager] => {
2026-04-17 05:55:53.020467 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-17 05:55:53.020477 | orchestrator | }
2026-04-17 05:55:53.020488 | orchestrator |
2026-04-17 05:55:53.020499 | orchestrator | TASK [Gather facts] ************************************************************
2026-04-17 05:55:53.020509 | orchestrator | Friday 17 April 2026 05:55:02 +0000 (0:00:02.203) 0:00:05.038 **********
2026-04-17 05:55:53.020520 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:55:53.020530 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:55:53.020541 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:55:53.020552 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:55:53.020562 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:55:53.020573 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:55:53.020583 | orchestrator | ok: [testbed-manager]
2026-04-17 05:55:53.020594 | orchestrator |
2026-04-17 05:55:53.020605 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-04-17 05:55:53.020616 | orchestrator | Friday 17 April 2026 05:55:08 +0000 (0:00:06.249) 0:00:11.287 **********
2026-04-17 05:55:53.020626 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 05:55:53.020637 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:55:53.020648 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 05:55:53.020659 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 05:55:53.020669 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 05:55:53.020680 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 05:55:53.020690 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 05:55:53.020701 | orchestrator |
2026-04-17 05:55:53.020712 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-04-17 05:55:53.020722 | orchestrator | Friday 17 April 2026 05:55:38 +0000 (0:00:30.144) 0:00:41.431 **********
2026-04-17 05:55:53.020733 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:55:53.020744 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:55:53.020755 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:55:53.020774 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:55:53.020785 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:55:53.020796 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:55:53.020806 | orchestrator | ok: [testbed-manager]
2026-04-17 05:55:53.020817 | orchestrator |
2026-04-17 05:55:53.020827 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-17 05:55:53.020838 | orchestrator | Friday 17 April 2026 05:55:39 +0000 (0:00:01.099) 0:00:42.530 **********
2026-04-17 05:55:53.020866 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-17 05:55:53.020880 | orchestrator |
2026-04-17 05:55:53.020891 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-17 05:55:53.020902 | orchestrator | Friday 17 April 2026 05:55:41 +0000 (0:00:02.078) 0:00:44.609 **********
2026-04-17 05:55:53.020913 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:55:53.020924 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:55:53.020935 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:55:53.020945 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:55:53.020956 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:55:53.020967 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:55:53.020978 | orchestrator | ok: [testbed-manager]
2026-04-17 05:55:53.020988 | orchestrator |
2026-04-17 05:55:53.020999 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-17 05:55:53.021010 | orchestrator | Friday 17 April 2026 05:55:43 +0000 (0:00:01.532) 0:00:46.141 **********
2026-04-17 05:55:53.021020 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:55:53.021031 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:55:53.021042 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:55:53.021052 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:55:53.021063 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:55:53.021074 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:55:53.021085 | orchestrator | ok: [testbed-manager]
2026-04-17 05:55:53.021095 | orchestrator |
2026-04-17 05:55:53.021106 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-17 05:55:53.021117 | orchestrator | Friday 17 April 2026 05:55:44 +0000 (0:00:00.799) 0:00:46.941 **********
2026-04-17 05:55:53.021128 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:55:53.021138 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:55:53.021149 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:55:53.021160 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:55:53.021170 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:55:53.021181 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:55:53.021192 | orchestrator | ok: [testbed-manager]
2026-04-17 05:55:53.021203 | orchestrator |
2026-04-17 05:55:53.021214 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-17 05:55:53.021224 | orchestrator | Friday 17 April 2026 05:55:45 +0000 (0:00:01.556) 0:00:48.498 **********
2026-04-17 05:55:53.021235 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:55:53.021246 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:55:53.021257 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:55:53.021267 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:55:53.021278 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:55:53.021289 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:55:53.021300 | orchestrator | ok: [testbed-manager]
2026-04-17 05:55:53.021311 | orchestrator |
2026-04-17 05:55:53.021327 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-17 05:55:53.021338 | orchestrator | Friday 17 April 2026 05:55:46 +0000 (0:00:00.833) 0:00:49.332 **********
2026-04-17 05:55:53.021349 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:55:53.021360 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:55:53.021371 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:55:53.021399 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:55:53.021410 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:55:53.021421 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:55:53.021432 | orchestrator | ok: [testbed-manager]
2026-04-17 05:55:53.021450 | orchestrator |
2026-04-17 05:55:53.021461 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-17 05:55:53.021471 | orchestrator | Friday 17 April 2026 05:55:47 +0000 (0:00:01.077) 0:00:50.409 **********
2026-04-17 05:55:53.021482 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:55:53.021493 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:55:53.021504 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:55:53.021514 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:55:53.021525 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:55:53.021536 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:55:53.021546 | orchestrator | ok: [testbed-manager]
2026-04-17 05:55:53.021557 | orchestrator |
2026-04-17 05:55:53.021568 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-17 05:55:53.021579 | orchestrator | Friday 17 April 2026 05:55:48 +0000 (0:00:00.912) 0:00:51.322 **********
2026-04-17 05:55:53.021590 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:55:53.021600 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:55:53.021611 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:55:53.021622 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:55:53.021633 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:55:53.021643 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:55:53.021654 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:55:53.021665 | orchestrator |
2026-04-17 05:55:53.021676 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-17 05:55:53.021686 | orchestrator | Friday 17 April 2026 05:55:49 +0000 (0:00:01.153) 0:00:52.475 **********
2026-04-17 05:55:53.021697 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:55:53.021708 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:55:53.021719 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:55:53.021729 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:55:53.021740 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:55:53.021751 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:55:53.021762 | orchestrator | ok: [testbed-manager]
2026-04-17 05:55:53.021772 | orchestrator |
2026-04-17 05:55:53.021783 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-17 05:55:53.021794 | orchestrator | Friday 17 April 2026 05:55:50 +0000 (0:00:00.875) 0:00:53.350 **********
2026-04-17 05:55:53.021805 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:55:53.021816 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 05:55:53.021826 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 05:55:53.021837 | orchestrator |
2026-04-17 05:55:53.021848 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-17 05:55:53.021859 | orchestrator | Friday 17 April 2026 05:55:51 +0000 (0:00:01.373) 0:00:54.724 **********
2026-04-17 05:55:53.021870 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:55:53.021881 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:55:53.021891 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:55:53.021902 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:55:53.021913 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:55:53.021923 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:55:53.021934 | orchestrator | ok: [testbed-manager]
2026-04-17 05:55:53.021945 | orchestrator |
2026-04-17 05:55:53.021956 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-17 05:55:53.021973 | orchestrator | Friday 17 April 2026 05:55:53 +0000 (0:00:01.031) 0:00:55.755 **********
2026-04-17 05:56:04.700081 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:56:04.700225 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 05:56:04.700255 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 05:56:04.700275 | orchestrator |
2026-04-17 05:56:04.700294 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-17 05:56:04.700314 | orchestrator | Friday 17 April 2026 05:55:55 +0000 (0:00:02.298) 0:00:58.053 **********
2026-04-17 05:56:04.700361 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:56:04.700462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 05:56:04.700474 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 05:56:04.700485 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:04.700496 | orchestrator |
2026-04-17 05:56:04.700507 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-17 05:56:04.700518 | orchestrator | Friday 17 April 2026 05:55:55 +0000 (0:00:00.449) 0:00:58.503 **********
2026-04-17 05:56:04.700531 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 05:56:04.700545 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 05:56:04.700556 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 05:56:04.700582 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:04.700596 | orchestrator |
2026-04-17 05:56:04.700609 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-17 05:56:04.700622 | orchestrator | Friday 17 April 2026 05:55:56 +0000 (0:00:00.940) 0:00:59.443 **********
2026-04-17 05:56:04.700637 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:04.700654 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:04.700667 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:04.700679 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:04.700693 | orchestrator |
2026-04-17 05:56:04.700705 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-17 05:56:04.700718 | orchestrator | Friday 17 April 2026 05:55:56 +0000 (0:00:00.192) 0:00:59.636 **********
2026-04-17 05:56:04.700734 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'aa031f9a4b08', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 05:55:53.698689', 'end': '2026-04-17 05:55:53.743982', 'delta': '0:00:00.045293', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['aa031f9a4b08'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 05:56:04.700783 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9f8a3fd74f0b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 05:55:54.301066', 'end': '2026-04-17 05:55:54.345469', 'delta': '0:00:00.044403', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9f8a3fd74f0b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 05:56:04.700799 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f2e2f728469b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 05:55:55.124037', 'end': '2026-04-17 05:55:55.170580', 'delta': '0:00:00.046543', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2e2f728469b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 05:56:04.700813 | orchestrator |
2026-04-17 05:56:04.700826 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-17 05:56:04.700839 | orchestrator | Friday 17 April 2026 05:55:57 +0000 (0:00:00.231) 0:00:59.867 **********
2026-04-17 05:56:04.700857 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:56:04.700870 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:56:04.700883 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:56:04.700895 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:56:04.700907 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:56:04.700919 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:56:04.700932 | orchestrator | ok: [testbed-manager]
2026-04-17 05:56:04.700945 | orchestrator |
2026-04-17 05:56:04.700958 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-17 05:56:04.700970 | orchestrator | Friday 17 April 2026 05:55:58 +0000 (0:00:01.476) 0:01:01.344 **********
2026-04-17 05:56:04.700981 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:04.700991 | orchestrator |
2026-04-17 05:56:04.701002 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-17 05:56:04.701013 | orchestrator | Friday 17 April 2026 05:55:58 +0000 (0:00:00.281) 0:01:01.625 **********
2026-04-17 05:56:04.701023 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:56:04.701034 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:56:04.701045 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:56:04.701055 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:56:04.701066 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:56:04.701077 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:56:04.701087 | orchestrator | ok: [testbed-manager]
2026-04-17 05:56:04.701098 | orchestrator |
2026-04-17 05:56:04.701109 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-17 05:56:04.701120 | orchestrator | Friday 17 April 2026 05:56:00 +0000 (0:00:01.279) 0:01:02.905 **********
2026-04-17 05:56:04.701131 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:56:04.701142 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-17 05:56:04.701153 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-17 05:56:04.701164 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-17 05:56:04.701175 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-17 05:56:04.701192 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-17 05:56:04.701203 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-17 05:56:04.701213 | orchestrator |
2026-04-17 05:56:04.701224 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 05:56:04.701235 | orchestrator | Friday 17 April 2026 05:56:02 +0000 (0:00:02.283) 0:01:05.188 **********
2026-04-17 05:56:04.701246 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:56:04.701257 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:56:04.701267 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:56:04.701278 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:56:04.701288 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:56:04.701299 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:56:04.701310 | orchestrator | ok: [testbed-manager]
2026-04-17 05:56:04.701321 | orchestrator |
2026-04-17 05:56:04.701332 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-17 05:56:04.701343 | orchestrator | Friday 17 April 2026 05:56:03 +0000 (0:00:01.184) 0:01:06.373 **********
2026-04-17 05:56:04.701358 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:04.701402 | orchestrator |
2026-04-17 05:56:04.701421 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-17 05:56:04.701439 | orchestrator | Friday 17 April 2026 05:56:03 +0000 (0:00:00.147) 0:01:06.520 **********
2026-04-17 05:56:04.701456 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:04.701474 | orchestrator |
2026-04-17 05:56:04.701490 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 05:56:04.701507 | orchestrator | Friday 17 April 2026 05:56:04 +0000 (0:00:00.268) 0:01:06.789 **********
2026-04-17 05:56:04.701526 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:04.701546 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:56:04.701565 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:56:04.701583 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:04.701598 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:04.701619 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:11.608907 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:11.608994 | orchestrator |
2026-04-17 05:56:11.609003 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-17 05:56:11.609011 | orchestrator | Friday 17 April 2026 05:56:05 +0000 (0:00:01.160) 0:01:07.949 **********
2026-04-17 05:56:11.609017 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:11.609024 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:56:11.609030 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:56:11.609036 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:11.609042 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:11.609048 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:11.609054 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:11.609060 | orchestrator |
2026-04-17 05:56:11.609067 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-17 05:56:11.609074 | orchestrator | Friday 17 April 2026 05:56:06 +0000 (0:00:01.169) 0:01:09.118 **********
2026-04-17 05:56:11.609080 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:11.609086 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:56:11.609092 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:56:11.609098 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:11.609104 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:11.609110 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:11.609118 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:11.609129 | orchestrator |
2026-04-17 05:56:11.609139 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-17 05:56:11.609149 | orchestrator | Friday 17 April 2026 05:56:07 +0000 (0:00:01.101) 0:01:10.219 **********
2026-04-17 05:56:11.609159 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:11.609169 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:56:11.609179 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:56:11.609208 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:11.609215 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:11.609222 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:11.609228 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:11.609234 | orchestrator |
2026-04-17 05:56:11.609240 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-17 05:56:11.609246 | orchestrator | Friday 17 April 2026 05:56:08 +0000 (0:00:00.819) 0:01:11.039 **********
2026-04-17 05:56:11.609252 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:11.609270 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:56:11.609276 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:56:11.609282 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:11.609288 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:11.609294 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:11.609300 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:11.609306 | orchestrator |
2026-04-17 05:56:11.609312 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-17 05:56:11.609318 | orchestrator | Friday 17 April 2026 05:56:09 +0000 (0:00:01.142) 0:01:12.182 **********
2026-04-17 05:56:11.609324 | orchestrator | skipping: [testbed-node-0]
2026-04-17 
05:56:11.609330 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:56:11.609336 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:56:11.609342 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:56:11.609348 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:56:11.609387 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:56:11.609394 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:56:11.609400 | orchestrator | 2026-04-17 05:56:11.609406 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 05:56:11.609413 | orchestrator | Friday 17 April 2026 05:56:10 +0000 (0:00:00.823) 0:01:13.005 ********** 2026-04-17 05:56:11.609419 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:56:11.609425 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:56:11.609439 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:56:11.609445 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:56:11.609452 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:56:11.609466 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:56:11.609472 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:56:11.609478 | orchestrator | 2026-04-17 05:56:11.609484 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 05:56:11.609491 | orchestrator | Friday 17 April 2026 05:56:11 +0000 (0:00:01.145) 0:01:14.150 ********** 2026-04-17 05:56:11.609499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:11.609509 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.609516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.609537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-17 05:56:11.609552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.609559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.609569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.609578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1d6df01d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-17 05:56:11.609592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.874754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.874883 | orchestrator |
skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.874931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.874954 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:11.874978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.874999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-17 05:56:11.875019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.875031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.875042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.875110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41525a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-17 05:56:11.875126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.875138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.875149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.875160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.875181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': 
{'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:11.875201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-36-58-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-17 05:56:12.102174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.102279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.102310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.102323 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:56:12.102341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60cf27b4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-17 05:56:12.102420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.102453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.102465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.102484 | orchestrator |
skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'uuids': ['7b3e98f1-7f68-4c04-9bb1-a0fd9b3252da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p']}})
2026-04-17 05:56:12.102498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c054ea69', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-17 05:56:12.102511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2']}})
2026-04-17 05:56:12.102531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.102543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.102554 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:56:12.102574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-17 05:56:12.225052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.225161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px', 'dm-uuid-CRYPT-LUKS2-0eb8d7ab97d34aa3a4f06ee9564e4391-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-17 05:56:12.225178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.225191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'uuids': ['0eb8d7ab-97d3-4aa3-a4f0-6ee9564e4391'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px']}})
2026-04-17 05:56:12.225204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08']}})
2026-04-17 05:56:12.225244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.225257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.225286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'uuids': ['0c9a4a4e-baea-4a48-b886-e6edd30675e6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB']}})
2026-04-17 05:56:12.225335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fc59f804', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-17 05:56:12.225450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdcd9064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 05:56:12.225466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.225488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd']}})  2026-04-17 05:56:12.392185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.392298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.392315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p', 'dm-uuid-CRYPT-LUKS2-7b3e98f17f684c049bb1a0fd9b3252da-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 05:56:12.392329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.392385 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:56:12.392400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 05:56:12.392412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.392423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM', 'dm-uuid-CRYPT-LUKS2-23d95080c3d748658de3cafbcbf22080-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 05:56:12.392435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.392464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'uuids': ['23d95080-c3d7-4865-8de3-cafbcbf22080'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM']}})  2026-04-17 05:56:12.392483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0']}})  2026-04-17 05:56:12.392495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.392518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '11ed6889', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 05:56:12.392540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.543476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.543586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB', 'dm-uuid-CRYPT-LUKS2-0c9a4a4ebaea4a48b886e6edd30675e6-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 05:56:12.543608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.543645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'uuids': ['7145b7e9-237d-4eff-af62-82cfb643a183'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS']}})  2026-04-17 05:56:12.543659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ab95973', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 05:56:12.543671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f']}})  2026-04-17 05:56:12.543682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.543694 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:56:12.543763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-04-17 05:56:12.543780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 05:56:12.543800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.543812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ', 'dm-uuid-CRYPT-LUKS2-9b48552cb2fb461da2ba0698b00ea049-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 05:56:12.543824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.543835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'uuids': ['9b48552c-b2fb-461d-a2ba-0698b00ea049'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ']}})  2026-04-17 05:56:12.543847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d']}})  2026-04-17 05:56:12.543868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.620794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b9d69c97', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 05:56:12.620913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.620930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.620944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS', 'dm-uuid-CRYPT-LUKS2-7145b7e9237d4effaf6282cfb643a183-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 05:56:12.620958 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:56:12.620972 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.621006 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.621026 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.621038 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 05:56:12.621050 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.621061 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.621072 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:56:12.621108 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d', 'scsi-SQEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d'], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '510ba09c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part16', 'scsi-SQEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part14', 'scsi-SQEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part15', 'scsi-SQEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part1', 'scsi-SQEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})
2026-04-17 05:56:12.893143 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.893274 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 05:56:12.893292 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:12.893306 | orchestrator |
2026-04-17 05:56:12.893318 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-17 05:56:12.893330 | orchestrator | Friday 17 April 2026 05:56:12 +0000 (0:00:01.335) 0:01:15.486 **********
2026-04-17 05:56:12.893344 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:12.893400 |
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:12.893414 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:12.893444 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:12.893498 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:12.893511 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:12.893523 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:12.893544 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1d6df01d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:12.893574 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.380113 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.380214 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.380229 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.380241 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.380270 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.380306 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.380335 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.380348 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.380403 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41525a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15', 
'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.380426 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.380446 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.756183 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:56:13.756286 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.756303 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.756315 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.756403 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-36-58-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.756434 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.756445 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.756474 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.756510 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60cf27b4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.756529 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.756539 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.756549 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:56:13.756566 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.891140 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:56:13.891245 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'uuids': ['7b3e98f1-7f68-4c04-9bb1-a0fd9b3252da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p']}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:13.891287 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c054ea69', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.891316 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2']}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.891331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.891344 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.891443 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.891456 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.891476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px', 'dm-uuid-CRYPT-LUKS2-0eb8d7ab97d34aa3a4f06ee9564e4391-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.891493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.891505 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.891523 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'uuids': ['0c9a4a4e-baea-4a48-b886-e6edd30675e6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB']}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.973946 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'uuids': ['0eb8d7ab-97d3-4aa3-a4f0-6ee9564e4391'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px']}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.974077 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdcd9064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.974104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08']}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.974120 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd']}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.974147 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.974166 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fc59f804', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.974189 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.974201 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:13.974220 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.061328 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.061475 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.061503 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p', 'dm-uuid-CRYPT-LUKS2-7b3e98f17f684c049bb1a0fd9b3252da-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.061514 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.061525 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM', 'dm-uuid-CRYPT-LUKS2-23d95080c3d748658de3cafbcbf22080-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.061551 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.061577 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'uuids': ['23d95080-c3d7-4865-8de3-cafbcbf22080'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM']}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.061593 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0']}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.061607 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.061626 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '11ed6889', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.173982 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.174196 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.174212 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'uuids': ['7145b7e9-237d-4eff-af62-82cfb643a183'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS']}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.174224 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.174321 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ab95973', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.174420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB', 'dm-uuid-CRYPT-LUKS2-0c9a4a4ebaea4a48b886e6edd30675e6-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.174440 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f']}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.174454 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.174464 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.174475 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.174492 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.174512 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ', 'dm-uuid-CRYPT-LUKS2-9b48552cb2fb461da2ba0698b00ea049-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.296222 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.296342 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'uuids': ['9b48552c-b2fb-461d-a2ba-0698b00ea049'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ']}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.296429 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d']}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.296469 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:14.296512 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b9d69c97', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:14.296527 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:56:14.296542 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:14.296555 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:14.296574 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:56:14.296586 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS', 'dm-uuid-CRYPT-LUKS2-7145b7e9237d4effaf6282cfb643a183-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:14.296598 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:56:14.296610 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:14.296629 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:18.435651 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:18.435764 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:18.435807 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:18.435822 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:18.435835 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:56:18.435880 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d', 'scsi-SQEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '510ba09c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part16', 'scsi-SQEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': 
['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part14', 'scsi-SQEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part15', 'scsi-SQEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part1', 'scsi-SQEMU_QEMU_HARDDISK_510ba09c-6639-45c5-b5d5-17f7dd37831d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:18.435908 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:18.435921 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 05:56:18.435935 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:18.435949 | orchestrator |
2026-04-17 05:56:18.435962 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-17 05:56:18.435976 | orchestrator | Friday 17 April 2026 05:56:14 +0000 (0:00:01.729) 0:01:17.216 **********
2026-04-17 05:56:18.435988 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:56:18.436000 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:56:18.436013 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:56:18.436026 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:56:18.436037 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:56:18.436049 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:56:18.436061 | orchestrator | ok: [testbed-manager]
2026-04-17 05:56:18.436072 | orchestrator |
2026-04-17 05:56:18.436084 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-17 05:56:18.436096 | orchestrator | Friday 17 April 2026 05:56:16 +0000 (0:00:01.609) 0:01:18.825 **********
2026-04-17 05:56:18.436107 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:56:18.436120 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:56:18.436148 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:56:18.436172 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:56:18.436184 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:56:18.436197 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:56:18.436209 | orchestrator | ok: [testbed-manager]
2026-04-17 05:56:18.436222 | orchestrator |
2026-04-17 05:56:18.436234 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 05:56:18.436247 | orchestrator | Friday 17 April 2026 05:56:16 +0000 (0:00:00.836) 0:01:19.662 **********
2026-04-17 05:56:18.436259 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:56:18.436271 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:56:18.436283 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:56:18.436296 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:56:18.436308 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:56:18.436320 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:18.436332 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:56:18.436365 | orchestrator |
2026-04-17 05:56:18.436378 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 05:56:18.436400 | orchestrator | Friday 17 April 2026 05:56:18 +0000 (0:00:01.513) 0:01:21.175 **********
2026-04-17 05:56:31.640453 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:31.640568 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:56:31.640618 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:56:31.640629 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:31.640639 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:31.640649 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:31.640658 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:31.640668 | orchestrator |
2026-04-17 05:56:31.640679 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 05:56:31.640689 | orchestrator | Friday 17 April 2026 05:56:19 +0000 (0:00:00.787) 0:01:21.962 **********
2026-04-17 05:56:31.640699 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:31.640709 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:56:31.640718 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:56:31.640728 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:31.640737 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:31.640747 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:31.640757 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-04-17 05:56:31.640767 | orchestrator |
2026-04-17 05:56:31.640777 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 05:56:31.640787 | orchestrator | Friday 17 April 2026 05:56:20 +0000 (0:00:01.678) 0:01:23.641 **********
2026-04-17 05:56:31.640797 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:31.640807 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:56:31.640816 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:56:31.640826 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:31.640836 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:31.640845 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:31.640855 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:31.640864 | orchestrator |
2026-04-17 05:56:31.640874 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 05:56:31.640884 | orchestrator | Friday 17 April 2026 05:56:21 +0000 (0:00:00.844) 0:01:24.486 **********
2026-04-17 05:56:31.640894 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:56:31.640904 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-17 05:56:31.640914 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 05:56:31.640923 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-17 05:56:31.640934 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 05:56:31.640945 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 05:56:31.640957 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-17 05:56:31.640968 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-17 05:56:31.640979 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-17 05:56:31.640990 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 05:56:31.641001 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 05:56:31.641012 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-17 05:56:31.641023 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-17 05:56:31.641033 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 05:56:31.641044 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-17 05:56:31.641056 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-17 05:56:31.641067 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-17 05:56:31.641078 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 05:56:31.641089 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-17 05:56:31.641100 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-17 05:56:31.641111 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-17 05:56:31.641122 | orchestrator |
2026-04-17 05:56:31.641133 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 05:56:31.641145 | orchestrator | Friday 17 April 2026 05:56:23 +0000 (0:00:02.116) 0:01:26.602 **********
2026-04-17 05:56:31.641163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:56:31.641175 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 05:56:31.641186 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 05:56:31.641197 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:31.641208 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-17 05:56:31.641219 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 05:56:31.641229 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-17 05:56:31.641240 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:56:31.641251 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-17 05:56:31.641262 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-17 05:56:31.641273 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 05:56:31.641284 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:56:31.641293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-17 05:56:31.641303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-17 05:56:31.641312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-17 05:56:31.641322 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:31.641353 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 05:56:31.641362 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 05:56:31.641372 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 05:56:31.641381 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:31.641391 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-17 05:56:31.641400 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-17 05:56:31.641410 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-17 05:56:31.641419 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:31.641447 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-17 05:56:31.641463 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-17 05:56:31.641473 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-17 05:56:31.641483 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:31.641492 | orchestrator |
2026-04-17 05:56:31.641502 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-17 05:56:31.641512 | orchestrator | Friday 17 April 2026 05:56:24 +0000 (0:00:00.962) 0:01:27.565 **********
2026-04-17 05:56:31.641522 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:56:31.641531 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:56:31.641541 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:56:31.641551 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:56:31.641561 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 05:56:31.641571 | orchestrator |
2026-04-17 05:56:31.641581 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 05:56:31.641592 | orchestrator | Friday 17 April 2026 05:56:26 +0000 (0:00:01.428) 0:01:28.994 **********
2026-04-17 05:56:31.641602 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:31.641612 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:31.641621 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:31.641631 | orchestrator |
2026-04-17 05:56:31.641641 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 05:56:31.641651 | orchestrator | Friday 17 April 2026 05:56:26 +0000 (0:00:00.338) 0:01:29.332 **********
2026-04-17 05:56:31.641660 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:31.641672 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:31.641689 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:31.641715 | orchestrator |
2026-04-17 05:56:31.641731 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 05:56:31.641746 | orchestrator | Friday 17 April 2026 05:56:27 +0000 (0:00:00.705) 0:01:30.038 **********
2026-04-17 05:56:31.641763 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:31.641777 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:56:31.641792 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:56:31.641808 | orchestrator |
2026-04-17 05:56:31.641823 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 05:56:31.641837 | orchestrator | Friday 17 April 2026 05:56:27 +0000 (0:00:00.364) 0:01:30.402 **********
2026-04-17 05:56:31.641851 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:56:31.641867 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:56:31.641883 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:56:31.641899 | orchestrator |
2026-04-17 05:56:31.641915 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 05:56:31.641929 | orchestrator | Friday 17 April 2026 05:56:28 +0000 (0:00:00.439) 0:01:30.842 **********
2026-04-17 05:56:31.641944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 05:56:31.641960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 05:56:31.641974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 05:56:31.641990 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:31.642005 | orchestrator |
2026-04-17 05:56:31.642096 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 05:56:31.642120 | orchestrator | Friday 17 April 2026 05:56:28 +0000 (0:00:00.458) 0:01:31.301 **********
2026-04-17 05:56:31.642136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 05:56:31.642194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 05:56:31.642214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 05:56:31.642230 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:31.642247 | orchestrator |
2026-04-17 05:56:31.642262 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 05:56:31.642278 | orchestrator | Friday 17 April 2026 05:56:28 +0000 (0:00:00.420) 0:01:31.722 **********
2026-04-17 05:56:31.642293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 05:56:31.642308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 05:56:31.642321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 05:56:31.642367 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:56:31.642381 | orchestrator |
2026-04-17 05:56:31.642397 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 05:56:31.642412 | orchestrator | Friday 17 April 2026 05:56:29 +0000 (0:00:00.750) 0:01:32.472 **********
2026-04-17 05:56:31.642427 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:56:31.642443 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:56:31.642459 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:56:31.642474 | orchestrator |
2026-04-17 05:56:31.642490 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 05:56:31.642506 | orchestrator | Friday 17 April 2026 05:56:30 +0000 (0:00:00.777) 0:01:33.249 **********
2026-04-17 05:56:31.642520 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-17 05:56:31.642536 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-17 05:56:31.642551 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-17 05:56:31.642566 | orchestrator |
2026-04-17 05:56:31.642581 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-17 05:56:31.642597 | orchestrator | Friday 17 April 2026 05:56:31 +0000 (0:00:00.600) 0:01:33.850 **********
2026-04-17 05:56:31.642613 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:56:31.642630 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 05:56:31.642645 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 05:56:31.642678 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 05:56:31.642714 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 05:57:04.222352 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 05:57:04.222471 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 05:57:04.222487 | orchestrator |
2026-04-17 05:57:04.222500 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-17 05:57:04.222512 | orchestrator | Friday 17 April 2026 05:56:31 +0000 (0:00:00.828) 0:01:34.679 **********
2026-04-17 05:57:04.222523 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:57:04.222535 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 05:57:04.222546 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 05:57:04.222556 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 05:57:04.222567 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 05:57:04.222577 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 05:57:04.222588 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 05:57:04.222599 | orchestrator |
2026-04-17 05:57:04.222610 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-04-17 05:57:04.222621 | orchestrator | Friday 17 April 2026 05:56:34 +0000 (0:00:02.602) 0:01:37.281 **********
2026-04-17 05:57:04.222632 | orchestrator | changed: [testbed-node-3]
2026-04-17 05:57:04.222643 | orchestrator | changed: [testbed-node-4]
2026-04-17 05:57:04.222654 | orchestrator | changed: [testbed-node-5]
2026-04-17 05:57:04.222665 | orchestrator | changed: [testbed-node-0]
2026-04-17 05:57:04.222675 | orchestrator | changed: [testbed-manager]
2026-04-17 05:57:04.222686 | orchestrator | changed: [testbed-node-2]
2026-04-17 05:57:04.222696 | orchestrator | changed: [testbed-node-1]
2026-04-17 05:57:04.222707 | orchestrator |
2026-04-17 05:57:04.222718 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-04-17 05:57:04.222729 | orchestrator | Friday 17 April 2026 05:56:44 +0000 (0:00:10.273) 0:01:47.555 **********
2026-04-17 05:57:04.222740 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:04.222751 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:04.222762 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:04.222772 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:04.222783 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:04.222794 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:04.222804 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:04.222815 | orchestrator |
2026-04-17 05:57:04.222826 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-04-17 05:57:04.222837 | orchestrator | Friday 17 April 2026 05:56:46 +0000 (0:00:01.263) 0:01:48.818 **********
2026-04-17 05:57:04.222850 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:04.222862 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:04.222875 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:04.222888 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:04.222900 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:04.222912 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:04.222925 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:04.222937 | orchestrator |
2026-04-17 05:57:04.222949 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-04-17 05:57:04.222962 | orchestrator | Friday 17 April 2026 05:56:46 +0000 (0:00:00.805) 0:01:49.624 **********
2026-04-17 05:57:04.222974 | orchestrator | changed: [testbed-node-2]
2026-04-17 05:57:04.222986 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:04.223022 | orchestrator | changed: [testbed-node-0]
2026-04-17 05:57:04.223035 | orchestrator | changed: [testbed-node-1]
2026-04-17 05:57:04.223047 | orchestrator | changed: [testbed-node-3] 2026-04-17 05:57:04.223059 | orchestrator | changed: [testbed-node-4] 2026-04-17 05:57:04.223071 | orchestrator | changed: [testbed-node-5] 2026-04-17 05:57:04.223084 | orchestrator | 2026-04-17 05:57:04.223097 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-04-17 05:57:04.223109 | orchestrator | Friday 17 April 2026 05:56:49 +0000 (0:00:02.486) 0:01:52.110 ********** 2026-04-17 05:57:04.223123 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-17 05:57:04.223136 | orchestrator | 2026-04-17 05:57:04.223148 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-04-17 05:57:04.223161 | orchestrator | Friday 17 April 2026 05:56:51 +0000 (0:00:02.050) 0:01:54.161 ********** 2026-04-17 05:57:04.223174 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.223187 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.223199 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:04.223210 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:04.223221 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:04.223231 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.223242 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.223253 | orchestrator | 2026-04-17 05:57:04.223264 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-04-17 05:57:04.223312 | orchestrator | Friday 17 April 2026 05:56:52 +0000 (0:00:01.177) 0:01:55.338 ********** 2026-04-17 05:57:04.223335 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.223354 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.223373 | orchestrator | skipping: 
[testbed-node-2] 2026-04-17 05:57:04.223385 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:04.223395 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:04.223406 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.223417 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.223427 | orchestrator | 2026-04-17 05:57:04.223438 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-04-17 05:57:04.223449 | orchestrator | Friday 17 April 2026 05:56:53 +0000 (0:00:01.272) 0:01:56.611 ********** 2026-04-17 05:57:04.223459 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.223488 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.223499 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:04.223510 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:04.223528 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:04.223539 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.223550 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.223560 | orchestrator | 2026-04-17 05:57:04.223571 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-04-17 05:57:04.223582 | orchestrator | Friday 17 April 2026 05:56:54 +0000 (0:00:00.871) 0:01:57.482 ********** 2026-04-17 05:57:04.223593 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.223603 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.223614 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:04.223624 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:04.223635 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:04.223646 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.223656 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.223667 | orchestrator | 2026-04-17 05:57:04.223678 | orchestrator | TASK [ceph-validate : Fail on unsupported 
CentOS release] ********************** 2026-04-17 05:57:04.223688 | orchestrator | Friday 17 April 2026 05:56:56 +0000 (0:00:01.265) 0:01:58.748 ********** 2026-04-17 05:57:04.223699 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.223710 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.223730 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:04.223740 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:04.223751 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:04.223762 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.223772 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.223783 | orchestrator | 2026-04-17 05:57:04.223794 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-04-17 05:57:04.223804 | orchestrator | Friday 17 April 2026 05:56:56 +0000 (0:00:00.867) 0:01:59.616 ********** 2026-04-17 05:57:04.223815 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.223826 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.223836 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:04.223847 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:04.223857 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:04.223868 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.223878 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.223889 | orchestrator | 2026-04-17 05:57:04.223900 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-04-17 05:57:04.223911 | orchestrator | Friday 17 April 2026 05:56:58 +0000 (0:00:01.166) 0:02:00.782 ********** 2026-04-17 05:57:04.223921 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.223932 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.223943 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:04.223953 | orchestrator | 
skipping: [testbed-node-3] 2026-04-17 05:57:04.223964 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:04.223974 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.223985 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.223995 | orchestrator | 2026-04-17 05:57:04.224006 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-04-17 05:57:04.224017 | orchestrator | Friday 17 April 2026 05:56:58 +0000 (0:00:00.882) 0:02:01.664 ********** 2026-04-17 05:57:04.224027 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.224038 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.224049 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:04.224059 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:04.224070 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:04.224081 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.224091 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.224102 | orchestrator | 2026-04-17 05:57:04.224112 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-04-17 05:57:04.224123 | orchestrator | Friday 17 April 2026 05:57:00 +0000 (0:00:01.229) 0:02:02.894 ********** 2026-04-17 05:57:04.224134 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.224144 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.224155 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:04.224165 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:04.224176 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:04.224186 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.224197 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.224207 | orchestrator | 2026-04-17 05:57:04.224218 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-04-17 
05:57:04.224229 | orchestrator | Friday 17 April 2026 05:57:01 +0000 (0:00:00.871) 0:02:03.765 ********** 2026-04-17 05:57:04.224240 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.224250 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.224261 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:04.224272 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:04.224322 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:04.224334 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.224345 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.224356 | orchestrator | 2026-04-17 05:57:04.224367 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-04-17 05:57:04.224390 | orchestrator | Friday 17 April 2026 05:57:02 +0000 (0:00:01.187) 0:02:04.953 ********** 2026-04-17 05:57:04.224401 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.224412 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.224423 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:04.224433 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:04.224444 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:04.224455 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.224465 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.224476 | orchestrator | 2026-04-17 05:57:04.224487 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-04-17 05:57:04.224498 | orchestrator | Friday 17 April 2026 05:57:03 +0000 (0:00:01.184) 0:02:06.137 ********** 2026-04-17 05:57:04.224508 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:04.224519 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:04.224530 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:04.224540 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:04.224551 | orchestrator 
| skipping: [testbed-node-4] 2026-04-17 05:57:04.224562 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:04.224573 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:04.224583 | orchestrator | 2026-04-17 05:57:04.224602 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-04-17 05:57:14.607226 | orchestrator | Friday 17 April 2026 05:57:04 +0000 (0:00:00.824) 0:02:06.962 ********** 2026-04-17 05:57:14.608094 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:14.608150 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:14.608162 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:14.608176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 05:57:14.608188 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 05:57:14.608199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 05:57:14.608210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 05:57:14.608221 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:14.608232 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 05:57:14.608243 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 05:57:14.608254 | orchestrator | skipping: [testbed-node-4] 
2026-04-17 05:57:14.608283 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:14.608294 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:14.608305 | orchestrator |
2026-04-17 05:57:14.608317 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-04-17 05:57:14.608329 | orchestrator | Friday 17 April 2026 05:57:05 +0000 (0:00:01.347) 0:02:08.309 **********
2026-04-17 05:57:14.608340 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:14.608351 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:14.608361 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:14.608372 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:14.608383 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:14.608393 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:14.608404 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:14.608416 | orchestrator |
2026-04-17 05:57:14.608427 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-04-17 05:57:14.608438 | orchestrator | Friday 17 April 2026 05:57:06 +0000 (0:00:00.927) 0:02:09.236 **********
2026-04-17 05:57:14.608471 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:14.608482 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:14.608493 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:14.608504 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:14.608514 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:14.608525 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:14.608535 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:14.608546 | orchestrator |
2026-04-17 05:57:14.608557 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-04-17 05:57:14.608567 | orchestrator | Friday 17 April 2026 05:57:07 +0000 (0:00:01.176) 0:02:10.412 **********
2026-04-17 05:57:14.608578 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:14.608588 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:14.608599 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:14.608609 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:14.608620 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:14.608630 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:14.608641 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:14.608651 | orchestrator |
2026-04-17 05:57:14.608662 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-04-17 05:57:14.608672 | orchestrator | Friday 17 April 2026 05:57:08 +0000 (0:00:00.826) 0:02:11.238 **********
2026-04-17 05:57:14.608683 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:14.608725 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:14.608736 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:14.608747 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:14.608758 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:14.608768 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:14.608778 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:14.608789 | orchestrator |
2026-04-17 05:57:14.608800 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-04-17 05:57:14.608811 | orchestrator | Friday 17 April 2026 05:57:09 +0000 (0:00:01.115) 0:02:12.354 **********
2026-04-17 05:57:14.608822 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:14.608833 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:14.608843 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:14.608854 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:14.608864 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:14.608875 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:14.608886 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:14.608896 | orchestrator |
2026-04-17 05:57:14.608907 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-04-17 05:57:14.608918 | orchestrator | Friday 17 April 2026 05:57:10 +0000 (0:00:00.786) 0:02:13.140 **********
2026-04-17 05:57:14.608929 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:14.608939 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:14.608950 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:14.608960 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:14.608971 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:14.608982 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:14.608992 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:14.609003 | orchestrator |
2026-04-17 05:57:14.609013 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-04-17 05:57:14.609024 | orchestrator | Friday 17 April 2026 05:57:11 +0000 (0:00:01.154) 0:02:14.295 **********
2026-04-17 05:57:14.609065 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:14.609077 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:14.609088 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:14.609098 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:14.609109 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 05:57:14.609120 | orchestrator |
2026-04-17 05:57:14.609139 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-04-17 05:57:14.609150 | orchestrator | Friday 17 April 2026 05:57:13 +0000 (0:00:01.787) 0:02:16.082 **********
2026-04-17 05:57:14.609161 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:57:14.609172 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:57:14.609183 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:57:14.609193 | orchestrator |
2026-04-17 05:57:14.609204 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-04-17 05:57:14.609215 | orchestrator | Friday 17 April 2026 05:57:13 +0000 (0:00:00.433) 0:02:16.515 **********
2026-04-17 05:57:14.609225 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 05:57:14.609236 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 05:57:14.609247 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:14.609258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})
2026-04-17 05:57:14.609297 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})
2026-04-17 05:57:14.609308 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:14.609318 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})
2026-04-17 05:57:14.609329 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})
2026-04-17 05:57:14.609340 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:14.609351 | orchestrator |
2026-04-17 05:57:14.609361 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-04-17 05:57:14.609372 | orchestrator | Friday 17 April 2026 05:57:14 +0000 (0:00:00.401) 0:02:16.916 **********
2026-04-17 05:57:14.609385 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'}, 'ansible_loop_var': 'item'})
2026-04-17 05:57:14.609398 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'}, 'ansible_loop_var': 'item'})
2026-04-17 05:57:14.609410 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:14.609420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'}, 'ansible_loop_var': 'item'})
2026-04-17 05:57:14.609432 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'}, 'ansible_loop_var': 'item'})
2026-04-17 05:57:14.609442 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:14.609453 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'}, 'ansible_loop_var': 'item'})
2026-04-17 05:57:14.609471 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'}, 'ansible_loop_var': 'item'})
2026-04-17 05:57:14.609483 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:14.609494 | orchestrator |
2026-04-17 05:57:14.609517 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-04-17 05:57:18.161460 | orchestrator | Friday 17 April 2026 05:57:14 +0000 (0:00:00.429) 0:02:17.346 **********
2026-04-17 05:57:18.161587 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:18.161607 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:18.161618 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:18.161629 | orchestrator |
2026-04-17 05:57:18.161641 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-04-17 05:57:18.161652 | orchestrator | Friday 17 April 2026 05:57:15 +0000 (0:00:00.738) 0:02:18.084 **********
2026-04-17 05:57:18.161662 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:18.161673 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:18.161684 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:18.161695 | orchestrator |
2026-04-17 05:57:18.161705 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-04-17 05:57:18.161716 | orchestrator | Friday 17 April 2026 05:57:15 +0000 (0:00:00.377) 0:02:18.462 **********
2026-04-17 05:57:18.161727 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:18.161737 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:18.161748 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:18.161758 | orchestrator |
2026-04-17 05:57:18.161769 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-04-17 05:57:18.161780 | orchestrator | Friday 17 April 2026 05:57:16 +0000 (0:00:00.331) 0:02:18.794 **********
2026-04-17 05:57:18.161790 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:18.161801 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:18.161812 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:18.161822 | orchestrator |
2026-04-17 05:57:18.161833 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-04-17 05:57:18.161843 | orchestrator | Friday 17 April 2026 05:57:16 +0000 (0:00:00.331) 0:02:19.125 **********
2026-04-17 05:57:18.161854 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})
2026-04-17 05:57:18.161868 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})
2026-04-17 05:57:18.161879 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})
2026-04-17 05:57:18.161890 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})
2026-04-17 05:57:18.161900 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})
2026-04-17 05:57:18.161911 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})
2026-04-17 05:57:18.161921 | orchestrator |
2026-04-17 05:57:18.161932 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-04-17 05:57:18.161944 | orchestrator | Friday 17 April 2026 05:57:17 +0000 (0:00:01.442) 0:02:20.567 **********
2026-04-17 05:57:18.161962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2/osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1776398064.4447901, 'mtime': 1776398064.43879, 'ctime': 1776398064.43879, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2/osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'}, 'ansible_loop_var': 'item'})
2026-04-17 05:57:18.162107 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08/osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1776398082.4350708, 'mtime': 1776398082.4310706, 'ctime': 1776398082.4310706, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08/osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'}, 'ansible_loop_var': 'item'})
2026-04-17 05:57:18.162128 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:18.162143 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-b2b01680-30d5-524c-a810-0db40fd977fd/osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1776398064.5295057, 'mtime': 1776398064.5255058, 'ctime': 1776398064.5255058, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-b2b01680-30d5-524c-a810-0db40fd977fd/osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'}, 'ansible_loop_var': 'item'})
2026-04-17 05:57:18.162158 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0/osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1776398082.7367938, 'mtime': 1776398082.7327938, 'ctime': 1776398082.7327938, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0/osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'}, 'ansible_loop_var': 'item'})
2026-04-17 05:57:18.162179 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:18.162240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-690571ed-11b8-555e-b420-011f2882a19f/osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1776398064.097679, 'mtime': 1776398064.093679, 'ctime': 1776398064.093679, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-690571ed-11b8-555e-b420-011f2882a19f/osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'}, 'ansible_loop_var': 'item'})
2026-04-17 05:57:20.110855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d/osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1776398082.5009625, 'mtime': 1776398082.4949625, 'ctime': 1776398082.4949625, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False,
'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d/osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.110975 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:20.110998 | orchestrator | 2026-04-17 05:57:20.111018 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-04-17 05:57:20.111067 | orchestrator | Friday 17 April 2026 05:57:18 +0000 (0:00:00.446) 0:02:21.014 ********** 2026-04-17 05:57:20.111086 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 05:57:20.111100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 05:57:20.111111 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:20.111122 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 05:57:20.111133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 05:57:20.111144 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:20.111154 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 
'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 05:57:20.111165 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 05:57:20.111176 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:20.111186 | orchestrator | 2026-04-17 05:57:20.111198 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-04-17 05:57:20.111210 | orchestrator | Friday 17 April 2026 05:57:18 +0000 (0:00:00.411) 0:02:21.425 ********** 2026-04-17 05:57:20.111223 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.111250 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.111309 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:20.111330 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.111374 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 
'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.111393 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:20.111406 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.111419 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.111442 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:20.111455 | orchestrator | 2026-04-17 05:57:20.111467 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-04-17 05:57:20.111480 | orchestrator | Friday 17 April 2026 05:57:19 +0000 (0:00:00.438) 0:02:21.863 ********** 2026-04-17 05:57:20.111492 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'})  2026-04-17 05:57:20.111504 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'})  2026-04-17 05:57:20.111516 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:20.111528 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'})  2026-04-17 05:57:20.111542 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'})  2026-04-17 05:57:20.111562 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:20.111582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'})  2026-04-17 05:57:20.111603 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'})  2026-04-17 05:57:20.111622 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:20.111642 | orchestrator | 2026-04-17 05:57:20.111656 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-04-17 05:57:20.111669 | orchestrator | Friday 17 April 2026 05:57:19 +0000 (0:00:00.675) 0:02:22.539 ********** 2026-04-17 05:57:20.111681 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-ba7178ba-163b-58b0-89b4-3a73c9468ec2', 'data_vg': 'ceph-ba7178ba-163b-58b0-89b4-3a73c9468ec2'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.111695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-34b96a2b-74e9-5d3b-a409-9327cdd3ba08', 'data_vg': 'ceph-34b96a2b-74e9-5d3b-a409-9327cdd3ba08'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.111707 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:20.111719 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': 
{'data': 'osd-block-b2b01680-30d5-524c-a810-0db40fd977fd', 'data_vg': 'ceph-b2b01680-30d5-524c-a810-0db40fd977fd'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.111737 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-1504e56e-19fb-5fe8-bf47-cc017f2297d0', 'data_vg': 'ceph-1504e56e-19fb-5fe8-bf47-cc017f2297d0'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.111748 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:20.111759 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-690571ed-11b8-555e-b420-011f2882a19f', 'data_vg': 'ceph-690571ed-11b8-555e-b420-011f2882a19f'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:20.111778 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-58d5b32d-9713-5f24-a4e2-aea701c9df8d', 'data_vg': 'ceph-58d5b32d-9713-5f24-a4e2-aea701c9df8d'}, 'ansible_loop_var': 'item'})  2026-04-17 05:57:24.851398 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:24.851932 | orchestrator | 2026-04-17 05:57:24.851969 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-04-17 05:57:24.851984 | orchestrator | Friday 17 April 2026 05:57:20 +0000 (0:00:00.417) 0:02:22.956 ********** 2026-04-17 05:57:24.851997 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:24.852010 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:24.852023 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:24.852036 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:24.852049 | orchestrator | skipping: 
[testbed-node-4] 2026-04-17 05:57:24.852061 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:24.852073 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:24.852085 | orchestrator | 2026-04-17 05:57:24.852098 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-04-17 05:57:24.852111 | orchestrator | Friday 17 April 2026 05:57:20 +0000 (0:00:00.780) 0:02:23.737 ********** 2026-04-17 05:57:24.852123 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:24.852135 | orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:24.852147 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:24.852159 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:24.852173 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 05:57:24.852185 | orchestrator | 2026-04-17 05:57:24.852198 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-04-17 05:57:24.852211 | orchestrator | Friday 17 April 2026 05:57:22 +0000 (0:00:01.797) 0:02:25.535 ********** 2026-04-17 05:57:24.852224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852333 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:24.852344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852396 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:24.852407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852497 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:24.852507 | orchestrator 
| 2026-04-17 05:57:24.852518 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-04-17 05:57:24.852529 | orchestrator | Friday 17 April 2026 05:57:23 +0000 (0:00:00.509) 0:02:26.044 ********** 2026-04-17 05:57:24.852539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852668 | orchestrator | 
skipping: [testbed-node-3] 2026-04-17 05:57:24.852679 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:24.852690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852743 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:24.852754 | orchestrator | 2026-04-17 05:57:24.852764 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-04-17 05:57:24.852775 | orchestrator | Friday 17 April 2026 05:57:24 +0000 (0:00:00.753) 0:02:26.798 ********** 2026-04-17 05:57:24.852786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 05:57:24.852826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-04-17 05:57:24.852836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 05:57:24.852847 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:24.852858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 05:57:24.852869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 05:57:24.852880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 05:57:24.852891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 05:57:24.852901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 05:57:24.852911 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:24.852928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 05:57:24.852939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 05:57:24.852950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 05:57:24.852960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 05:57:24.852971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 05:57:24.852981 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:24.852992 | orchestrator |
2026-04-17 05:57:24.853003 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] ***********************************
2026-04-17 05:57:24.853013 | orchestrator | Friday 17 April 2026 05:57:24 +0000 (0:00:00.459) 0:02:27.257 **********
2026-04-17 05:57:24.853024 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:24.853035 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:24.853052 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:32.258455 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:32.258595 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:32.258622 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:32.258641 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:32.258660 | orchestrator |
2026-04-17 05:57:32.258680 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] *****************************
2026-04-17 05:57:32.258700 | orchestrator | Friday 17 April 2026 05:57:25 +0000 (0:00:01.136) 0:02:28.043 **********
2026-04-17 05:57:32.258719 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:32.258738 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:32.258756 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:32.258774 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:32.258792 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:32.258810 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:32.258828 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:32.258845 | orchestrator |
2026-04-17 05:57:32.258863 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ******************
2026-04-17 05:57:32.258882 | orchestrator | Friday 17 April 2026 05:57:26 +0000 (0:00:01.136) 0:02:29.179 **********
2026-04-17 05:57:32.258901 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:32.258920 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:32.258939 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:32.258986 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:32.259007 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:32.259026 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:32.259045 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:32.259063 | orchestrator |
2026-04-17 05:57:32.259081 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] ***
2026-04-17 05:57:32.259101 | orchestrator | Friday 17 April 2026 05:57:27 +0000 (0:00:00.778) 0:02:29.957 **********
2026-04-17 05:57:32.259121 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:32.259139 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:32.259157 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:32.259176 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:32.259194 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:32.259213 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:32.259232 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:32.259277 | orchestrator |
2026-04-17 05:57:32.259296 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] ***
2026-04-17 05:57:32.259316 | orchestrator | Friday 17 April 2026 05:57:28 +0000 (0:00:01.180) 0:02:31.138 **********
2026-04-17 05:57:32.259333 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:32.259349 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:32.259367 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:32.259386 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:32.259403 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:32.259422 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:32.259440 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:32.259459 | orchestrator |
2026-04-17 05:57:32.259478 | orchestrator | TASK [ceph-validate : Validate container registry credentials] *****************
2026-04-17 05:57:32.259497 | orchestrator | Friday 17 April 2026 05:57:29 +0000 (0:00:00.793) 0:02:32.235 **********
2026-04-17 05:57:32.259515 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:32.259534 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:32.259552 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:32.259569 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:32.259588 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:32.259605 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:32.259624 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:32.259642 | orchestrator |
2026-04-17 05:57:32.259661 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-04-17 05:57:32.259678 | orchestrator | Friday 17 April 2026 05:57:30 +0000 (0:00:01.136) 0:02:33.028 **********
2026-04-17 05:57:32.259696 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:32.259715 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:32.259733 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:32.259751 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:32.259769 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:32.259787 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:32.259805 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:32.259822 | orchestrator |
2026-04-17 05:57:32.259840 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-04-17 05:57:32.259859 | orchestrator | Friday 17 April 2026 05:57:31 +0000 (0:00:01.136) 0:02:34.165 **********
2026-04-17 05:57:32.259878 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:32.259916 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:32.259938 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:32.259973 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:32.259991 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:32.260013 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-17 05:57:32.260031 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:32.260075 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:32.260095 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:32.260113 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:32.260131 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:32.260150 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:32.260169 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-17 05:57:32.260188 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:32.260205 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:32.260223 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:32.260264 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:32.260284 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:32.260301 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-17 05:57:32.260319 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:32.260337 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:32.260356 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:32.260374 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:32.260392 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:32.260410 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:32.260449 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:32.260469 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:32.260487 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:32.260505 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:32.260523 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-17 05:57:32.260553 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:34.439709 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:34.439786 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:34.439796 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:34.439804 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:34.439812 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:34.439819 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:34.439827 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-17 05:57:34.439833 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:34.439839 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:34.439845 | orchestrator | skipping:
[testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-17 05:57:34.439851 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-17 05:57:34.439856 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-17 05:57:34.439862 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-17 05:57:34.439868 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-17 05:57:34.439891 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-17 05:57:34.439897 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:34.439902 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-17 05:57:34.439908 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:34.439913 | orchestrator | 2026-04-17 05:57:34.439920 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-04-17 05:57:34.439926 | orchestrator | Friday 17 April 2026 05:57:32 +0000 (0:00:01.104) 0:02:35.269 ********** 2026-04-17 05:57:34.439932 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:57:34.439938 
| orchestrator | skipping: [testbed-node-1] 2026-04-17 05:57:34.439943 | orchestrator | skipping: [testbed-node-2] 2026-04-17 05:57:34.439959 | orchestrator | skipping: [testbed-node-3] 2026-04-17 05:57:34.439965 | orchestrator | skipping: [testbed-node-4] 2026-04-17 05:57:34.439970 | orchestrator | skipping: [testbed-node-5] 2026-04-17 05:57:34.439976 | orchestrator | skipping: [testbed-manager] 2026-04-17 05:57:34.439982 | orchestrator | 2026-04-17 05:57:34.439988 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-04-17 05:57:34.439993 | orchestrator | Friday 17 April 2026 05:57:33 +0000 (0:00:01.172) 0:02:36.442 ********** 2026-04-17 05:57:34.439999 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-17 05:57:34.440005 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-17 05:57:34.440010 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-17 05:57:34.440016 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-17 05:57:34.440034 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-17 05:57:34.440040 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-04-17 05:57:34.440046 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:34.440051 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:34.440057 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:34.440063 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:34.440068 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:34.440074 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:34.440084 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-17 05:57:34.440089 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:34.440095 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:34.440101 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:34.440106 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:34.440112 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:34.440117 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:34.440123 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-17 05:57:34.440129 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:34.440134 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:34.440143 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:34.440149 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:34.440155 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:34.440160 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:34.440166 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:34.440171 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:34.440181 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:50.952146 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-17 05:57:50.952348 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:50.952377 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:50.952395 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:50.952445 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:50.952466 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:50.952483 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-17 05:57:50.952498 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:50.952516 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-17 05:57:50.952527 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-17 05:57:50.952544 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:50.952562 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-17 05:57:50.952579 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:50.952594 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:50.952612 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-17 05:57:50.952627 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:50.952644 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-17 05:57:50.952660 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-17 05:57:50.952695 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-17 05:57:50.952713 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:50.952731 | orchestrator |
2026-04-17 05:57:50.952750 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-04-17 05:57:50.952768 | orchestrator | Friday 17 April 2026 05:57:34 +0000 (0:00:01.139) 0:02:37.582 **********
2026-04-17 05:57:50.952784 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:50.952801 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:50.952819 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:50.952837 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:50.952854 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:50.952871 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:50.952888 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:50.952905 | orchestrator |
2026-04-17 05:57:50.952923 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-04-17 05:57:50.952940 | orchestrator | Friday 17 April 2026 05:57:35 +0000 (0:00:01.154) 0:02:38.736 **********
2026-04-17 05:57:50.952958 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:50.952975 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:50.952991 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:50.953019 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:50.953036 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:50.953053 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:50.953069 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:50.953085 | orchestrator |
2026-04-17 05:57:50.953103 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-04-17 05:57:50.953143 | orchestrator | Friday 17 April 2026 05:57:36 +0000 (0:00:00.826) 0:02:39.563 **********
2026-04-17 05:57:50.953162 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:50.953179 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:50.953194 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:50.953210 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:50.953273 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:50.953290 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:50.953307 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:50.953323 | orchestrator |
2026-04-17 05:57:50.953341 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-04-17 05:57:50.953357 | orchestrator | Friday 17 April 2026 05:57:38 +0000 (0:00:01.946) 0:02:41.509 **********
2026-04-17 05:57:50.953372 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-17 05:57:50.953390 | orchestrator |
2026-04-17 05:57:50.953406 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-04-17 05:57:50.953428 | orchestrator | Friday 17 April 2026 05:57:40 +0000 (0:00:02.045) 0:02:43.555 **********
2026-04-17 05:57:50.953446 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-17 05:57:50.953462 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-17 05:57:50.953476 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-17 05:57:50.953490 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-17 05:57:50.953503 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-17 05:57:50.953515 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-17 05:57:50.953528 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-17 05:57:50.953540 | orchestrator |
2026-04-17 05:57:50.953554 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-04-17 05:57:50.953568 | orchestrator | Friday 17 April 2026 05:57:41 +0000 (0:00:00.974) 0:02:44.529 **********
2026-04-17 05:57:50.953580 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:50.953593 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:50.953607 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:50.953619 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:50.953633 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:50.953647 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:50.953660 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:50.953673 | orchestrator |
2026-04-17 05:57:50.953685 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-04-17 05:57:50.953698 | orchestrator | Friday 17 April 2026 05:57:42 +0000 (0:00:01.213) 0:02:45.742 **********
2026-04-17 05:57:50.953711 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:50.953723 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:50.953735 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:50.953746 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:50.953759 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:50.953771 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:50.953784 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:50.953797 | orchestrator |
2026-04-17 05:57:50.953823 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-04-17 05:57:50.953836 | orchestrator | Friday 17 April 2026 05:57:43 +0000 (0:00:00.857) 0:02:46.600 **********
2026-04-17 05:57:50.953848 | orchestrator | ok: [testbed-node-1]
2026-04-17 05:57:50.953861 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:57:50.953873 | orchestrator | ok: [testbed-node-2]
2026-04-17 05:57:50.953900 | orchestrator | ok: [testbed-node-3]
2026-04-17 05:57:50.953913 | orchestrator | ok: [testbed-node-4]
2026-04-17 05:57:50.953926 | orchestrator | ok: [testbed-node-5]
2026-04-17 05:57:50.953939 | orchestrator | ok: [testbed-manager]
2026-04-17 05:57:50.953951 | orchestrator |
2026-04-17 05:57:50.953964 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-04-17 05:57:50.953976 | orchestrator | Friday 17 April 2026 05:57:45 +0000 (0:00:01.816) 0:02:48.417 **********
2026-04-17 05:57:50.954000 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:50.954013 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:50.954104 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:50.954118 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:50.954132 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:50.954148 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:50.954163 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:50.954176 | orchestrator |
2026-04-17 05:57:50.954189 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-04-17 05:57:50.954203 | orchestrator | Friday 17 April 2026 05:57:47 +0000 (0:00:01.717) 0:02:50.155 **********
2026-04-17 05:57:50.954239 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:50.954253 | orchestrator | skipping: [testbed-node-1]
2026-04-17 05:57:50.954268 | orchestrator | skipping: [testbed-node-2]
2026-04-17 05:57:50.954282 | orchestrator | skipping: [testbed-node-3]
2026-04-17 05:57:50.954296 | orchestrator | skipping: [testbed-node-4]
2026-04-17 05:57:50.954309 | orchestrator | skipping: [testbed-node-5]
2026-04-17 05:57:50.954322 | orchestrator | skipping: [testbed-manager]
2026-04-17 05:57:50.954336 | orchestrator |
2026-04-17 05:57:50.954349 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-04-17 05:57:50.954363 | orchestrator | Friday 17 April 2026 05:57:49 +0000 (0:00:01.625) 0:02:51.872 **********
2026-04-17 05:57:50.954377 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:57:50.954391 | orchestrator |
2026-04-17 05:57:50.954405 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-04-17 05:57:50.954418 | orchestrator | Friday 17 April 2026 05:57:50 +0000 (0:00:01.625) 0:02:53.498 **********
2026-04-17 05:57:50.954432 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:57:50.954446 | orchestrator |
2026-04-17 05:57:50.954478 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-04-17 05:58:10.610238 | orchestrator |
2026-04-17 05:58:10.610344 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-17 05:58:10.610359 | orchestrator | Friday 17 April 2026 05:57:51 +0000 (0:00:00.806) 0:02:54.305 **********
2026-04-17 05:58:10.610370 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.610381 | orchestrator |
2026-04-17 05:58:10.610392 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-17 05:58:10.610402 | orchestrator | Friday 17 April 2026 05:57:52 +0000 (0:00:00.497) 0:02:54.802 **********
2026-04-17 05:58:10.610411 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.610421 | orchestrator |
2026-04-17 05:58:10.610431 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-04-17 05:58:10.610441 | orchestrator | Friday 17 April 2026 05:57:52 +0000 (0:00:00.586) 0:02:55.389 **********
2026-04-17 05:58:10.610453 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-17 05:58:10.610490 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-17 05:58:10.610501 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-17 05:58:10.610511 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-17 05:58:10.610523 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-17 05:58:10.610534 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}])
2026-04-17 05:58:10.610545 | orchestrator |
2026-04-17 05:58:10.610569 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-17 05:58:10.610580 | orchestrator |
2026-04-17 05:58:10.610590 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-17 05:58:10.610599 | orchestrator | Friday 17 April 2026 05:58:02 +0000 (0:00:09.610) 0:03:04.999 **********
2026-04-17 05:58:10.610609 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.610619 | orchestrator |
2026-04-17 05:58:10.610628 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-17 05:58:10.610638 | orchestrator | Friday 17 April 2026 05:58:02 +0000 (0:00:00.506) 0:03:05.505 **********
2026-04-17 05:58:10.610647 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.610657 | orchestrator |
2026-04-17 05:58:10.610667 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-17 05:58:10.610676 | orchestrator | Friday 17 April 2026 05:58:02 +0000 (0:00:00.152) 0:03:05.657 **********
2026-04-17 05:58:10.610686 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:10.610696 | orchestrator |
2026-04-17 05:58:10.610706 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-17 05:58:10.610716 | orchestrator | Friday 17 April 2026 05:58:03 +0000 (0:00:00.153) 0:03:05.811 **********
2026-04-17 05:58:10.610726 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.610735 | orchestrator |
2026-04-17 05:58:10.610745 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-17 05:58:10.610755 | orchestrator | Friday 17 April 2026 05:58:03 +0000 (0:00:00.149) 0:03:05.961 **********
2026-04-17 05:58:10.610765 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-17 05:58:10.610775 | orchestrator |
2026-04-17 05:58:10.610784 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-17 05:58:10.610809 | orchestrator | Friday 17 April 2026 05:58:03 +0000 (0:00:00.235) 0:03:06.196 **********
2026-04-17 05:58:10.610827 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.610837 | orchestrator |
2026-04-17 05:58:10.610847 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-17 05:58:10.610856 | orchestrator | Friday 17 April 2026 05:58:03 +0000 (0:00:00.537) 0:03:06.734 **********
2026-04-17 05:58:10.610866 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.610875 | orchestrator |
2026-04-17 05:58:10.610885 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-17 05:58:10.610894 | orchestrator | Friday 17 April 2026 05:58:04 +0000 (0:00:00.171) 0:03:06.906 **********
2026-04-17 05:58:10.610904 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.610914 | orchestrator |
2026-04-17 05:58:10.610923 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-17 05:58:10.610933 | orchestrator | Friday 17 April 2026 05:58:04 +0000 (0:00:00.487) 0:03:07.394 **********
2026-04-17 05:58:10.610943 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.610952 | orchestrator |
2026-04-17 05:58:10.610962 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-17 05:58:10.610972 | orchestrator | Friday 17 April 2026 05:58:05 +0000 (0:00:00.583) 0:03:07.977 **********
2026-04-17 05:58:10.610981 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.610990 | orchestrator |
2026-04-17 05:58:10.611000 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-17 05:58:10.611010 | orchestrator | Friday 17 April 2026 05:58:05 +0000 (0:00:00.206) 0:03:08.184 **********
2026-04-17 05:58:10.611019 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.611029 | orchestrator |
2026-04-17 05:58:10.611038 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-17 05:58:10.611049 | orchestrator | Friday 17 April 2026 05:58:05 +0000 (0:00:00.174) 0:03:08.358 **********
2026-04-17 05:58:10.611058 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:10.611068 | orchestrator |
2026-04-17 05:58:10.611077 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-17 05:58:10.611087 | orchestrator | Friday 17 April 2026 05:58:05 +0000 (0:00:00.170) 0:03:08.529 **********
2026-04-17 05:58:10.611097 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.611107 | orchestrator |
2026-04-17 05:58:10.611116 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-17 05:58:10.611126 | orchestrator | Friday 17 April 2026 05:58:05 +0000 (0:00:00.150) 0:03:08.680 **********
2026-04-17 05:58:10.611136 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:58:10.611146 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 05:58:10.611156 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 05:58:10.611165 | orchestrator |
2026-04-17 05:58:10.611175 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-17 05:58:10.611185 | orchestrator | Friday 17 April 2026 05:58:06 +0000 (0:00:00.773) 0:03:09.453 **********
2026-04-17 05:58:10.611214 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:10.611224 | orchestrator |
2026-04-17 05:58:10.611234 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-17 05:58:10.611243 | orchestrator | Friday 17 April 2026 05:58:07 +0000 (0:00:00.320) 0:03:09.774 **********
2026-04-17 05:58:10.611253 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:58:10.611263 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 05:58:10.611272 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 05:58:10.611282 | orchestrator |
2026-04-17 05:58:10.611292 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-17 05:58:10.611301 | orchestrator | Friday 17 April 2026 05:58:09 +0000 (0:00:01.975) 0:03:11.750 **********
2026-04-17 05:58:10.611311 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:58:10.611327 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 05:58:10.611337 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 05:58:10.611347 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:10.611357 | orchestrator |
2026-04-17 05:58:10.611366 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-17 05:58:10.611380 | orchestrator | Friday 17 April 2026 05:58:09 +0000 (0:00:00.439) 0:03:12.190 **********
2026-04-17 05:58:10.611392 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 05:58:10.611403 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 05:58:10.611414 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 05:58:10.611423 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:10.611433 | orchestrator |
2026-04-17 05:58:10.611443 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-17 05:58:10.611453 | orchestrator | Friday 17 April 2026 05:58:10 +0000 (0:00:01.098) 0:03:13.288 **********
2026-04-17 05:58:10.611470 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 05:58:15.404623 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 05:58:15.404747 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 05:58:15.404766 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:15.404780 | orchestrator |
2026-04-17 05:58:15.404792 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-17 05:58:15.404804 | orchestrator | Friday 17 April 2026 05:58:10 +0000 (0:00:00.179) 0:03:13.467 **********
2026-04-17 05:58:15.404818 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'aa031f9a4b08', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 05:58:07.608017', 'end': '2026-04-17 05:58:07.659000', 'delta': '0:00:00.050983', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['aa031f9a4b08'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 05:58:15.404856 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9f8a3fd74f0b', 'stderr': '', 'rc': 0, 'cmd':
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 05:58:08.230108', 'end': '2026-04-17 05:58:08.274505', 'delta': '0:00:00.044397', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9f8a3fd74f0b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-17 05:58:15.404882 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f2e2f728469b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 05:58:08.805519', 'end': '2026-04-17 05:58:08.861927', 'delta': '0:00:00.056408', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2e2f728469b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-17 05:58:15.404893 | orchestrator | 2026-04-17 05:58:15.404905 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-17 05:58:15.404916 | orchestrator | Friday 17 April 2026 05:58:10 +0000 (0:00:00.221) 0:03:13.689 ********** 2026-04-17 05:58:15.404927 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:58:15.404939 | orchestrator | 2026-04-17 05:58:15.404949 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-17 05:58:15.404960 | orchestrator | 
Friday 17 April 2026 05:58:11 +0000 (0:00:00.293) 0:03:13.982 ********** 2026-04-17 05:58:15.404971 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:15.404982 | orchestrator | 2026-04-17 05:58:15.404993 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-17 05:58:15.405004 | orchestrator | Friday 17 April 2026 05:58:11 +0000 (0:00:00.687) 0:03:14.670 ********** 2026-04-17 05:58:15.405015 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:58:15.405026 | orchestrator | 2026-04-17 05:58:15.405036 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-17 05:58:15.405047 | orchestrator | Friday 17 April 2026 05:58:12 +0000 (0:00:00.546) 0:03:15.216 ********** 2026-04-17 05:58:15.405075 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-17 05:58:15.405087 | orchestrator | 2026-04-17 05:58:15.405098 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 05:58:15.405109 | orchestrator | Friday 17 April 2026 05:58:13 +0000 (0:00:01.162) 0:03:16.378 ********** 2026-04-17 05:58:15.405119 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:58:15.405130 | orchestrator | 2026-04-17 05:58:15.405141 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 05:58:15.405153 | orchestrator | Friday 17 April 2026 05:58:13 +0000 (0:00:00.153) 0:03:16.532 ********** 2026-04-17 05:58:15.405166 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:15.405179 | orchestrator | 2026-04-17 05:58:15.405226 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-17 05:58:15.405239 | orchestrator | Friday 17 April 2026 05:58:13 +0000 (0:00:00.177) 0:03:16.710 ********** 2026-04-17 05:58:15.405252 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:15.405264 | orchestrator | 2026-04-17 
05:58:15.405276 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 05:58:15.405289 | orchestrator | Friday 17 April 2026 05:58:14 +0000 (0:00:00.253) 0:03:16.964 ********** 2026-04-17 05:58:15.405301 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:15.405322 | orchestrator | 2026-04-17 05:58:15.405333 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 05:58:15.405344 | orchestrator | Friday 17 April 2026 05:58:14 +0000 (0:00:00.139) 0:03:17.103 ********** 2026-04-17 05:58:15.405354 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:15.405365 | orchestrator | 2026-04-17 05:58:15.405376 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-17 05:58:15.405386 | orchestrator | Friday 17 April 2026 05:58:14 +0000 (0:00:00.145) 0:03:17.249 ********** 2026-04-17 05:58:15.405397 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:15.405408 | orchestrator | 2026-04-17 05:58:15.405419 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-17 05:58:15.405430 | orchestrator | Friday 17 April 2026 05:58:14 +0000 (0:00:00.141) 0:03:17.390 ********** 2026-04-17 05:58:15.405441 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:15.405452 | orchestrator | 2026-04-17 05:58:15.405462 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 05:58:15.405473 | orchestrator | Friday 17 April 2026 05:58:14 +0000 (0:00:00.140) 0:03:17.530 ********** 2026-04-17 05:58:15.405484 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:15.405495 | orchestrator | 2026-04-17 05:58:15.405505 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 05:58:15.405516 | orchestrator | Friday 17 April 2026 05:58:14 +0000 (0:00:00.153) 
0:03:17.684 ********** 2026-04-17 05:58:15.405527 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:15.405538 | orchestrator | 2026-04-17 05:58:15.405549 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 05:58:15.405560 | orchestrator | Friday 17 April 2026 05:58:15 +0000 (0:00:00.191) 0:03:17.876 ********** 2026-04-17 05:58:15.405571 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:15.405582 | orchestrator | 2026-04-17 05:58:15.405592 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 05:58:15.405603 | orchestrator | Friday 17 April 2026 05:58:15 +0000 (0:00:00.141) 0:03:18.017 ********** 2026-04-17 05:58:15.405614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:58:15.405631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:58:15.405643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:58:15.405656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 05:58:15.405683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:58:15.674648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:58:15.674731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:58:15.674763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1d6df01d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 05:58:15.674775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:58:15.674784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 05:58:15.674809 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:15.674818 | orchestrator | 2026-04-17 05:58:15.674827 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 05:58:15.674836 | orchestrator | Friday 17 April 2026 05:58:15 +0000 (0:00:00.288) 0:03:18.306 ********** 2026-04-17 05:58:15.674859 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:58:15.674869 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:58:15.674878 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:58:15.674913 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:58:15.674922 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:58:15.674936 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:58:15.674951 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:58:25.879564 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1d6df01d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:58:25.879687 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:58:25.879728 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 05:58:25.879741 | 
orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:25.879754 | orchestrator | 2026-04-17 05:58:25.879766 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-17 05:58:25.879778 | orchestrator | Friday 17 April 2026 05:58:16 +0000 (0:00:00.595) 0:03:18.901 ********** 2026-04-17 05:58:25.879789 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:58:25.879801 | orchestrator | 2026-04-17 05:58:25.879813 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-17 05:58:25.879823 | orchestrator | Friday 17 April 2026 05:58:16 +0000 (0:00:00.505) 0:03:19.407 ********** 2026-04-17 05:58:25.879834 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:58:25.879846 | orchestrator | 2026-04-17 05:58:25.879857 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 05:58:25.879888 | orchestrator | Friday 17 April 2026 05:58:16 +0000 (0:00:00.149) 0:03:19.557 ********** 2026-04-17 05:58:25.879900 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:58:25.879910 | orchestrator | 2026-04-17 05:58:25.879921 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 05:58:25.879932 | orchestrator | Friday 17 April 2026 05:58:17 +0000 (0:00:00.535) 0:03:20.093 ********** 2026-04-17 05:58:25.879943 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:25.879953 | orchestrator | 2026-04-17 05:58:25.879964 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 05:58:25.879975 | orchestrator | Friday 17 April 2026 05:58:17 +0000 (0:00:00.141) 0:03:20.234 ********** 2026-04-17 05:58:25.879986 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:25.879997 | orchestrator | 2026-04-17 05:58:25.880008 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 
05:58:25.880018 | orchestrator | Friday 17 April 2026 05:58:17 +0000 (0:00:00.240) 0:03:20.475 ********** 2026-04-17 05:58:25.880031 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:25.880044 | orchestrator | 2026-04-17 05:58:25.880056 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-17 05:58:25.880068 | orchestrator | Friday 17 April 2026 05:58:17 +0000 (0:00:00.168) 0:03:20.644 ********** 2026-04-17 05:58:25.880080 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 05:58:25.880093 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-17 05:58:25.880105 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-17 05:58:25.880117 | orchestrator | 2026-04-17 05:58:25.880130 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-17 05:58:25.880142 | orchestrator | Friday 17 April 2026 05:58:18 +0000 (0:00:00.747) 0:03:21.391 ********** 2026-04-17 05:58:25.880155 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-17 05:58:25.880195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-17 05:58:25.880208 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-17 05:58:25.880220 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:25.880232 | orchestrator | 2026-04-17 05:58:25.880245 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-17 05:58:25.880257 | orchestrator | Friday 17 April 2026 05:58:18 +0000 (0:00:00.176) 0:03:21.567 ********** 2026-04-17 05:58:25.880279 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:25.880291 | orchestrator | 2026-04-17 05:58:25.880303 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-17 05:58:25.880316 | orchestrator | Friday 17 April 2026 05:58:18 +0000 
(0:00:00.152) 0:03:21.720 ********** 2026-04-17 05:58:25.880328 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 05:58:25.880341 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 05:58:25.880361 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 05:58:25.880374 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 05:58:25.880387 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 05:58:25.880398 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 05:58:25.880409 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 05:58:25.880420 | orchestrator | 2026-04-17 05:58:25.880431 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-17 05:58:25.880441 | orchestrator | Friday 17 April 2026 05:58:20 +0000 (0:00:01.260) 0:03:22.981 ********** 2026-04-17 05:58:25.880452 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 05:58:25.880464 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 05:58:25.880474 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 05:58:25.880485 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 05:58:25.880496 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 05:58:25.880507 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 05:58:25.880517 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 
05:58:25.880528 | orchestrator | 2026-04-17 05:58:25.880539 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-17 05:58:25.880550 | orchestrator | Friday 17 April 2026 05:58:22 +0000 (0:00:01.873) 0:03:24.855 ********** 2026-04-17 05:58:25.880560 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-17 05:58:25.880571 | orchestrator | 2026-04-17 05:58:25.880582 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-17 05:58:25.880593 | orchestrator | Friday 17 April 2026 05:58:24 +0000 (0:00:02.118) 0:03:26.974 ********** 2026-04-17 05:58:25.880603 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:25.880616 | orchestrator | 2026-04-17 05:58:25.880636 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-17 05:58:25.880653 | orchestrator | Friday 17 April 2026 05:58:24 +0000 (0:00:00.264) 0:03:27.238 ********** 2026-04-17 05:58:25.880673 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:25.880692 | orchestrator | 2026-04-17 05:58:25.880710 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-17 05:58:25.880728 | orchestrator | Friday 17 April 2026 05:58:24 +0000 (0:00:00.128) 0:03:27.367 ********** 2026-04-17 05:58:25.880747 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-17 05:58:25.880764 | orchestrator | 2026-04-17 05:58:25.880785 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-17 05:58:25.880816 | orchestrator | Friday 17 April 2026 05:58:25 +0000 (0:00:01.255) 0:03:28.622 ********** 2026-04-17 05:58:52.397298 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:58:52.397421 | orchestrator | 2026-04-17 05:58:52.397437 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-04-17 05:58:52.397449 | orchestrator | Friday 17 April 2026 05:58:26 +0000 (0:00:00.154) 0:03:28.776 **********
2026-04-17 05:58:52.397487 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:58:52.397499 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 05:58:52.397512 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 05:58:52.397522 | orchestrator |
2026-04-17 05:58:52.397533 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-04-17 05:58:52.397544 | orchestrator | Friday 17 April 2026 05:58:27 +0000 (0:00:01.478) 0:03:30.254 **********
2026-04-17 05:58:52.397555 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-04-17 05:58:52.397565 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-04-17 05:58:52.397578 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-04-17 05:58:52.397589 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-04-17 05:58:52.397599 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-04-17 05:58:52.397610 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-04-17 05:58:52.397621 | orchestrator |
2026-04-17 05:58:52.397632 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-04-17 05:58:52.397643 | orchestrator | Friday 17 April 2026 05:58:39 +0000 (0:00:12.140) 0:03:42.395 **********
2026-04-17 05:58:52.397653 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:58:52.397664 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 05:58:52.397675 | orchestrator |
2026-04-17 05:58:52.397686 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-04-17 05:58:52.397696 | orchestrator | Friday 17 April 2026 05:58:42 +0000 (0:00:02.957) 0:03:45.352 **********
2026-04-17 05:58:52.397707 | orchestrator | changed: [testbed-node-0]
2026-04-17 05:58:52.397717 | orchestrator |
2026-04-17 05:58:52.397728 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 05:58:52.397738 | orchestrator | Friday 17 April 2026 05:58:44 +0000 (0:00:01.488) 0:03:46.841 **********
2026-04-17 05:58:52.397763 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-04-17 05:58:52.397774 | orchestrator |
2026-04-17 05:58:52.397785 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 05:58:52.397795 | orchestrator | Friday 17 April 2026 05:58:44 +0000 (0:00:00.605) 0:03:47.446 **********
2026-04-17 05:58:52.397806 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-04-17 05:58:52.397817 | orchestrator |
2026-04-17 05:58:52.397827 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 05:58:52.397838 | orchestrator | Friday 17 April 2026 05:58:45 +0000 (0:00:00.579) 0:03:48.026 **********
2026-04-17 05:58:52.397848 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:52.397859 | orchestrator |
2026-04-17 05:58:52.397870 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 05:58:52.397880 | orchestrator | Friday 17 April 2026 05:58:46 +0000 (0:00:00.929) 0:03:48.956 **********
2026-04-17 05:58:52.397891 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.397901 | orchestrator |
2026-04-17 05:58:52.397912 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 05:58:52.397923 | orchestrator | Friday 17 April 2026 05:58:46 +0000 (0:00:00.147) 0:03:49.103 **********
2026-04-17 05:58:52.397934 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.397945 | orchestrator |
2026-04-17 05:58:52.397955 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 05:58:52.397966 | orchestrator | Friday 17 April 2026 05:58:46 +0000 (0:00:00.139) 0:03:49.242 **********
2026-04-17 05:58:52.397984 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.397995 | orchestrator |
2026-04-17 05:58:52.398006 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 05:58:52.398080 | orchestrator | Friday 17 April 2026 05:58:46 +0000 (0:00:00.129) 0:03:49.372 **********
2026-04-17 05:58:52.398125 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:52.398158 | orchestrator |
2026-04-17 05:58:52.398170 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 05:58:52.398180 | orchestrator | Friday 17 April 2026 05:58:47 +0000 (0:00:00.567) 0:03:49.940 **********
2026-04-17 05:58:52.398191 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.398202 | orchestrator |
2026-04-17 05:58:52.398213 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 05:58:52.398223 | orchestrator | Friday 17 April 2026 05:58:47 +0000 (0:00:00.131) 0:03:50.071 **********
2026-04-17 05:58:52.398234 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.398245 | orchestrator |
2026-04-17 05:58:52.398255 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 05:58:52.398266 | orchestrator | Friday 17 April 2026 05:58:47 +0000 (0:00:00.145) 0:03:50.217 **********
2026-04-17 05:58:52.398277 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:52.398288 | orchestrator |
2026-04-17 05:58:52.398299 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 05:58:52.398309 | orchestrator | Friday 17 April 2026 05:58:48 +0000 (0:00:00.602) 0:03:50.819 **********
2026-04-17 05:58:52.398320 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:52.398331 | orchestrator |
2026-04-17 05:58:52.398449 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 05:58:52.398467 | orchestrator | Friday 17 April 2026 05:58:48 +0000 (0:00:00.601) 0:03:51.421 **********
2026-04-17 05:58:52.398478 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.398489 | orchestrator |
2026-04-17 05:58:52.398500 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 05:58:52.398511 | orchestrator | Friday 17 April 2026 05:58:48 +0000 (0:00:00.147) 0:03:51.568 **********
2026-04-17 05:58:52.398521 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:52.398532 | orchestrator |
2026-04-17 05:58:52.398543 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 05:58:52.398554 | orchestrator | Friday 17 April 2026 05:58:48 +0000 (0:00:00.167) 0:03:51.736 **********
2026-04-17 05:58:52.398565 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.398575 | orchestrator |
2026-04-17 05:58:52.398586 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 05:58:52.398597 | orchestrator | Friday 17 April 2026 05:58:49 +0000 (0:00:00.149) 0:03:51.885 **********
2026-04-17 05:58:52.398608 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.398618 | orchestrator |
2026-04-17 05:58:52.398629 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 05:58:52.398640 | orchestrator | Friday 17 April 2026 05:58:49 +0000 (0:00:00.145) 0:03:52.031 **********
2026-04-17 05:58:52.398651 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.398661 | orchestrator |
2026-04-17 05:58:52.398672 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 05:58:52.398683 | orchestrator | Friday 17 April 2026 05:58:49 +0000 (0:00:00.473) 0:03:52.504 **********
2026-04-17 05:58:52.398693 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.398704 | orchestrator |
2026-04-17 05:58:52.398714 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 05:58:52.398725 | orchestrator | Friday 17 April 2026 05:58:49 +0000 (0:00:00.146) 0:03:52.651 **********
2026-04-17 05:58:52.398736 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.398747 | orchestrator |
2026-04-17 05:58:52.398757 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 05:58:52.398768 | orchestrator | Friday 17 April 2026 05:58:50 +0000 (0:00:00.149) 0:03:52.800 **********
2026-04-17 05:58:52.398789 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:52.398800 | orchestrator |
2026-04-17 05:58:52.398811 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 05:58:52.398822 | orchestrator | Friday 17 April 2026 05:58:50 +0000 (0:00:00.173) 0:03:52.973 **********
2026-04-17 05:58:52.398832 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:52.398843 | orchestrator |
2026-04-17 05:58:52.398854 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 05:58:52.398872 | orchestrator | Friday 17 April 2026 05:58:50 +0000 (0:00:00.195) 0:03:53.169 **********
2026-04-17 05:58:52.398883 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:58:52.398894 | orchestrator |
2026-04-17 05:58:52.398904 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-17 05:58:52.398915 | orchestrator | Friday 17 April 2026 05:58:50 +0000 (0:00:00.247) 0:03:53.416 **********
2026-04-17 05:58:52.398926 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.398937 | orchestrator |
2026-04-17 05:58:52.398947 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-17 05:58:52.398958 | orchestrator | Friday 17 April 2026 05:58:50 +0000 (0:00:00.139) 0:03:53.556 **********
2026-04-17 05:58:52.398969 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.398979 | orchestrator |
2026-04-17 05:58:52.398990 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-17 05:58:52.399001 | orchestrator | Friday 17 April 2026 05:58:50 +0000 (0:00:00.149) 0:03:53.706 **********
2026-04-17 05:58:52.399011 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.399022 | orchestrator |
2026-04-17 05:58:52.399033 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-17 05:58:52.399043 | orchestrator | Friday 17 April 2026 05:58:51 +0000 (0:00:00.130) 0:03:53.837 **********
2026-04-17 05:58:52.399054 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.399065 | orchestrator |
2026-04-17 05:58:52.399075 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-17 05:58:52.399086 | orchestrator | Friday 17 April 2026 05:58:51 +0000 (0:00:00.135) 0:03:53.973 **********
2026-04-17 05:58:52.399097 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.399107 | orchestrator |
2026-04-17 05:58:52.399118 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-17 05:58:52.399129 | orchestrator | Friday 17 April 2026 05:58:51 +0000 (0:00:00.145) 0:03:54.118 **********
2026-04-17 05:58:52.399159 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.399170 | orchestrator |
2026-04-17 05:58:52.399181 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-17 05:58:52.399192 | orchestrator | Friday 17 April 2026 05:58:51 +0000 (0:00:00.137) 0:03:54.256 **********
2026-04-17 05:58:52.399202 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.399213 | orchestrator |
2026-04-17 05:58:52.399224 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-17 05:58:52.399235 | orchestrator | Friday 17 April 2026 05:58:51 +0000 (0:00:00.130) 0:03:54.387 **********
2026-04-17 05:58:52.399245 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.399256 | orchestrator |
2026-04-17 05:58:52.399267 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-17 05:58:52.399277 | orchestrator | Friday 17 April 2026 05:58:52 +0000 (0:00:00.481) 0:03:54.868 **********
2026-04-17 05:58:52.399288 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.399299 | orchestrator |
2026-04-17 05:58:52.399309 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-17 05:58:52.399320 | orchestrator | Friday 17 April 2026 05:58:52 +0000 (0:00:00.132) 0:03:55.001 **********
2026-04-17 05:58:52.399331 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:58:52.399342 | orchestrator |
2026-04-17 05:58:52.399352 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-17 05:58:52.399363 | orchestrator | Friday 17 April 2026 05:58:52 +0000 (0:00:00.133) 0:03:55.134 **********
2026-04-17 05:59:11.827071 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.827216 | orchestrator |
2026-04-17 05:59:11.827233 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-17 05:59:11.827246 | orchestrator | Friday 17 April 2026 05:58:52 +0000 (0:00:00.126) 0:03:55.260 **********
2026-04-17 05:59:11.827257 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.827268 | orchestrator |
2026-04-17 05:59:11.827279 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-17 05:59:11.827290 | orchestrator | Friday 17 April 2026 05:58:52 +0000 (0:00:00.247) 0:03:55.508 **********
2026-04-17 05:59:11.827301 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:59:11.827312 | orchestrator |
2026-04-17 05:59:11.827323 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-17 05:59:11.827334 | orchestrator | Friday 17 April 2026 05:58:53 +0000 (0:00:00.960) 0:03:56.468 **********
2026-04-17 05:59:11.827345 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:59:11.827355 | orchestrator |
2026-04-17 05:59:11.827366 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-17 05:59:11.827377 | orchestrator | Friday 17 April 2026 05:58:55 +0000 (0:00:01.400) 0:03:57.869 **********
2026-04-17 05:59:11.827388 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-04-17 05:59:11.827399 | orchestrator |
2026-04-17 05:59:11.827410 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-17 05:59:11.827420 | orchestrator | Friday 17 April 2026 05:58:55 +0000 (0:00:00.595) 0:03:58.464 **********
2026-04-17 05:59:11.827431 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.827442 | orchestrator |
2026-04-17 05:59:11.827452 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-17 05:59:11.827463 | orchestrator | Friday 17 April 2026 05:58:55 +0000 (0:00:00.154) 0:03:58.618 **********
2026-04-17 05:59:11.827474 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.827484 | orchestrator |
2026-04-17 05:59:11.827495 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-17 05:59:11.827506 | orchestrator | Friday 17 April 2026 05:58:56 +0000 (0:00:00.136) 0:03:58.755 **********
2026-04-17 05:59:11.827517 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 05:59:11.827529 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 05:59:11.827541 | orchestrator |
2026-04-17 05:59:11.827552 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-17 05:59:11.827562 | orchestrator | Friday 17 April 2026 05:58:56 +0000 (0:00:00.863) 0:03:59.618 **********
2026-04-17 05:59:11.827573 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:59:11.827584 | orchestrator |
2026-04-17 05:59:11.827610 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-17 05:59:11.827623 | orchestrator | Friday 17 April 2026 05:58:58 +0000 (0:00:01.489) 0:04:01.108 **********
2026-04-17 05:59:11.827635 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.827647 | orchestrator |
2026-04-17 05:59:11.827659 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-17 05:59:11.827672 | orchestrator | Friday 17 April 2026 05:58:58 +0000 (0:00:00.160) 0:04:01.269 **********
2026-04-17 05:59:11.827685 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.827697 | orchestrator |
2026-04-17 05:59:11.827709 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-17 05:59:11.827722 | orchestrator | Friday 17 April 2026 05:58:58 +0000 (0:00:00.156) 0:04:01.425 **********
2026-04-17 05:59:11.827734 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.827746 | orchestrator |
2026-04-17 05:59:11.827759 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-17 05:59:11.827771 | orchestrator | Friday 17 April 2026 05:58:58 +0000 (0:00:00.152) 0:04:01.578 **********
2026-04-17 05:59:11.827783 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-04-17 05:59:11.827815 | orchestrator |
2026-04-17 05:59:11.827828 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-17 05:59:11.827840 | orchestrator | Friday 17 April 2026 05:58:59 +0000 (0:00:00.666) 0:04:02.245 **********
2026-04-17 05:59:11.827853 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:59:11.827865 | orchestrator |
2026-04-17 05:59:11.827877 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-17 05:59:11.827889 | orchestrator | Friday 17 April 2026 05:59:00 +0000 (0:00:00.746) 0:04:02.991 **********
2026-04-17 05:59:11.827901 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 05:59:11.827914 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 05:59:11.827926 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 05:59:11.827939 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.827951 | orchestrator |
2026-04-17 05:59:11.827965 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-17 05:59:11.827975 | orchestrator | Friday 17 April 2026 05:59:00 +0000 (0:00:00.144) 0:04:03.136 **********
2026-04-17 05:59:11.827986 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.827997 | orchestrator |
2026-04-17 05:59:11.828007 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-17 05:59:11.828018 | orchestrator | Friday 17 April 2026 05:59:00 +0000 (0:00:00.124) 0:04:03.261 **********
2026-04-17 05:59:11.828028 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828039 | orchestrator |
2026-04-17 05:59:11.828050 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-17 05:59:11.828060 | orchestrator | Friday 17 April 2026 05:59:00 +0000 (0:00:00.172) 0:04:03.433 **********
2026-04-17 05:59:11.828071 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828081 | orchestrator |
2026-04-17 05:59:11.828092 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-17 05:59:11.828138 | orchestrator | Friday 17 April 2026 05:59:00 +0000 (0:00:00.150) 0:04:03.583 **********
2026-04-17 05:59:11.828150 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828161 | orchestrator |
2026-04-17 05:59:11.828172 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-17 05:59:11.828183 | orchestrator | Friday 17 April 2026 05:59:01 +0000 (0:00:00.164) 0:04:03.748 **********
2026-04-17 05:59:11.828193 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828204 | orchestrator |
2026-04-17 05:59:11.828215 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-17 05:59:11.828226 | orchestrator | Friday 17 April 2026 05:59:01 +0000 (0:00:00.162) 0:04:03.910 **********
2026-04-17 05:59:11.828236 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:59:11.828247 | orchestrator |
2026-04-17 05:59:11.828258 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-17 05:59:11.828269 | orchestrator | Friday 17 April 2026 05:59:03 +0000 (0:00:01.925) 0:04:05.836 **********
2026-04-17 05:59:11.828279 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:59:11.828290 | orchestrator |
2026-04-17 05:59:11.828301 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-17 05:59:11.828311 | orchestrator | Friday 17 April 2026 05:59:03 +0000 (0:00:00.146) 0:04:05.983 **********
2026-04-17 05:59:11.828322 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-04-17 05:59:11.828333 | orchestrator |
2026-04-17 05:59:11.828344 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-17 05:59:11.828354 | orchestrator | Friday 17 April 2026 05:59:03 +0000 (0:00:00.627) 0:04:06.610 **********
2026-04-17 05:59:11.828365 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828376 | orchestrator |
2026-04-17 05:59:11.828387 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-17 05:59:11.828397 | orchestrator | Friday 17 April 2026 05:59:04 +0000 (0:00:00.160) 0:04:06.771 **********
2026-04-17 05:59:11.828416 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828427 | orchestrator |
2026-04-17 05:59:11.828438 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-17 05:59:11.828448 | orchestrator | Friday 17 April 2026 05:59:04 +0000 (0:00:00.158) 0:04:06.929 **********
2026-04-17 05:59:11.828459 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828469 | orchestrator |
2026-04-17 05:59:11.828480 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-17 05:59:11.828491 | orchestrator | Friday 17 April 2026 05:59:04 +0000 (0:00:00.165) 0:04:07.094 **********
2026-04-17 05:59:11.828502 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828512 | orchestrator |
2026-04-17 05:59:11.828523 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-17 05:59:11.828539 | orchestrator | Friday 17 April 2026 05:59:04 +0000 (0:00:00.173) 0:04:07.267 **********
2026-04-17 05:59:11.828550 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828561 | orchestrator |
2026-04-17 05:59:11.828572 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-17 05:59:11.828583 | orchestrator | Friday 17 April 2026 05:59:04 +0000 (0:00:00.157) 0:04:07.425 **********
2026-04-17 05:59:11.828593 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828604 | orchestrator |
2026-04-17 05:59:11.828615 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-17 05:59:11.828626 | orchestrator | Friday 17 April 2026 05:59:04 +0000 (0:00:00.151) 0:04:07.577 **********
2026-04-17 05:59:11.828637 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828647 | orchestrator |
2026-04-17 05:59:11.828658 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-17 05:59:11.828669 | orchestrator | Friday 17 April 2026 05:59:04 +0000 (0:00:00.159) 0:04:07.737 **********
2026-04-17 05:59:11.828680 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:11.828690 | orchestrator |
2026-04-17 05:59:11.828701 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-17 05:59:11.828712 | orchestrator | Friday 17 April 2026 05:59:05 +0000 (0:00:00.206) 0:04:07.943 **********
2026-04-17 05:59:11.828723 | orchestrator | ok: [testbed-node-0]
2026-04-17 05:59:11.828733 | orchestrator |
2026-04-17 05:59:11.828744 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-17 05:59:11.828755 | orchestrator | Friday 17 April 2026 05:59:05 +0000 (0:00:00.628) 0:04:08.572 **********
2026-04-17 05:59:11.828766 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-04-17 05:59:11.828776 | orchestrator |
2026-04-17 05:59:11.828787 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-17 05:59:11.828798 | orchestrator | Friday 17 April 2026 05:59:06 +0000 (0:00:00.609) 0:04:09.182 **********
2026-04-17 05:59:11.828808 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-04-17 05:59:11.828820 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-17 05:59:11.828831 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-17 05:59:11.828841 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-17 05:59:11.828852 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-17 05:59:11.828862 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-17 05:59:11.828873 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-17 05:59:11.828884 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-17 05:59:11.828896 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-17 05:59:11.828907 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-17 05:59:11.828918 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-17 05:59:11.828928 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-17 05:59:11.828946 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-17 05:59:11.828957 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-17 05:59:11.828973 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-04-17 05:59:26.590947 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-04-17 05:59:26.591054 | orchestrator |
2026-04-17 05:59:26.591068 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-17 05:59:26.591079 | orchestrator | Friday 17 April 2026 05:59:12 +0000 (0:00:05.807) 0:04:14.990 **********
2026-04-17 05:59:26.591089 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591119 | orchestrator |
2026-04-17 05:59:26.591128 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-17 05:59:26.591137 | orchestrator | Friday 17 April 2026 05:59:12 +0000 (0:00:00.144) 0:04:15.134 **********
2026-04-17 05:59:26.591146 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591155 | orchestrator |
2026-04-17 05:59:26.591164 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-17 05:59:26.591173 | orchestrator | Friday 17 April 2026 05:59:12 +0000 (0:00:00.159) 0:04:15.294 **********
2026-04-17 05:59:26.591182 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591191 | orchestrator |
2026-04-17 05:59:26.591199 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-17 05:59:26.591208 | orchestrator | Friday 17 April 2026 05:59:12 +0000 (0:00:00.158) 0:04:15.453 **********
2026-04-17 05:59:26.591217 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591225 | orchestrator |
2026-04-17 05:59:26.591234 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-17 05:59:26.591243 | orchestrator | Friday 17 April 2026 05:59:12 +0000 (0:00:00.155) 0:04:15.608 **********
2026-04-17 05:59:26.591251 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591260 | orchestrator |
2026-04-17 05:59:26.591268 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-17 05:59:26.591277 | orchestrator | Friday 17 April 2026 05:59:13 +0000 (0:00:00.143) 0:04:15.752 **********
2026-04-17 05:59:26.591286 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591294 | orchestrator |
2026-04-17 05:59:26.591303 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-17 05:59:26.591313 | orchestrator | Friday 17 April 2026 05:59:13 +0000 (0:00:00.149) 0:04:15.901 **********
2026-04-17 05:59:26.591321 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591330 | orchestrator |
2026-04-17 05:59:26.591339 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-17 05:59:26.591347 | orchestrator | Friday 17 April 2026 05:59:13 +0000 (0:00:00.141) 0:04:16.043 **********
2026-04-17 05:59:26.591356 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591364 | orchestrator |
2026-04-17 05:59:26.591373 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-17 05:59:26.591399 | orchestrator | Friday 17 April 2026 05:59:13 +0000 (0:00:00.133) 0:04:16.176 **********
2026-04-17 05:59:26.591408 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591417 | orchestrator |
2026-04-17 05:59:26.591425 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-17 05:59:26.591434 | orchestrator | Friday 17 April 2026 05:59:13 +0000 (0:00:00.121) 0:04:16.297 **********
2026-04-17 05:59:26.591443 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591451 | orchestrator |
2026-04-17 05:59:26.591460 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-17 05:59:26.591469 | orchestrator | Friday 17 April 2026 05:59:14 +0000 (0:00:00.536) 0:04:16.834 **********
2026-04-17 05:59:26.591480 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591490 | orchestrator |
2026-04-17 05:59:26.591499 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-17 05:59:26.591530 | orchestrator | Friday 17 April 2026 05:59:14 +0000 (0:00:00.149) 0:04:16.983 **********
2026-04-17 05:59:26.591541 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591551 | orchestrator |
2026-04-17 05:59:26.591561 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-17 05:59:26.591570 | orchestrator | Friday 17 April 2026 05:59:14 +0000 (0:00:00.137) 0:04:17.121 **********
2026-04-17 05:59:26.591581 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591590 | orchestrator |
2026-04-17 05:59:26.591600 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-17 05:59:26.591610 | orchestrator | Friday 17 April 2026 05:59:14 +0000 (0:00:00.232) 0:04:17.354 **********
2026-04-17 05:59:26.591619 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591629 | orchestrator |
2026-04-17 05:59:26.591639 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-17 05:59:26.591648 | orchestrator | Friday 17 April 2026 05:59:14 +0000 (0:00:00.134) 0:04:17.488 **********
2026-04-17 05:59:26.591658 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591669 | orchestrator |
2026-04-17 05:59:26.591679 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-17 05:59:26.591689 | orchestrator | Friday 17 April 2026 05:59:15 +0000 (0:00:00.263) 0:04:17.752 **********
2026-04-17 05:59:26.591697 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591706 | orchestrator |
2026-04-17 05:59:26.591715 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-17 05:59:26.591724 | orchestrator | Friday 17 April 2026 05:59:15 +0000 (0:00:00.152) 0:04:17.904 **********
2026-04-17 05:59:26.591732 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591741 | orchestrator |
2026-04-17 05:59:26.591750 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 05:59:26.591761 | orchestrator | Friday 17 April 2026 05:59:15 +0000 (0:00:00.138) 0:04:18.043 **********
2026-04-17 05:59:26.591769 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591778 | orchestrator |
2026-04-17 05:59:26.591786 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 05:59:26.591795 | orchestrator | Friday 17 April 2026 05:59:15 +0000 (0:00:00.162) 0:04:18.206 **********
2026-04-17 05:59:26.591804 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591814 | orchestrator |
2026-04-17 05:59:26.591837 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 05:59:26.591847 | orchestrator | Friday 17 April 2026 05:59:15 +0000 (0:00:00.146) 0:04:18.352 **********
2026-04-17 05:59:26.591856 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591865 | orchestrator |
2026-04-17 05:59:26.591873 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 05:59:26.591882 | orchestrator | Friday 17 April 2026 05:59:15 +0000 (0:00:00.161) 0:04:18.513 **********
2026-04-17 05:59:26.591891 | orchestrator | skipping: [testbed-node-0]
2026-04-17 05:59:26.591900 | orchestrator |
2026-04-17 05:59:26.591908 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 05:59:26.591917 | orchestrator | Friday 17 April 2026 05:59:15 +0000 (0:00:00.138) 0:04:18.652 **********
2026-04-17 05:59:26.591926 | orchestrator | skipping: [testbed-node-0]
=> (item=testbed-node-3)  2026-04-17 05:59:26.591935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-17 05:59:26.591944 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-17 05:59:26.591952 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:59:26.591961 | orchestrator | 2026-04-17 05:59:26.591970 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 05:59:26.591979 | orchestrator | Friday 17 April 2026 05:59:16 +0000 (0:00:00.803) 0:04:19.455 ********** 2026-04-17 05:59:26.591988 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-17 05:59:26.591996 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-17 05:59:26.592011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-17 05:59:26.592020 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:59:26.592029 | orchestrator | 2026-04-17 05:59:26.592037 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 05:59:26.592046 | orchestrator | Friday 17 April 2026 05:59:17 +0000 (0:00:00.854) 0:04:20.309 ********** 2026-04-17 05:59:26.592055 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-17 05:59:26.592063 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-17 05:59:26.592072 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-17 05:59:26.592080 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:59:26.592089 | orchestrator | 2026-04-17 05:59:26.592136 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 05:59:26.592145 | orchestrator | Friday 17 April 2026 05:59:18 +0000 (0:00:01.196) 0:04:21.505 ********** 2026-04-17 05:59:26.592154 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:59:26.592163 | orchestrator | 2026-04-17 
05:59:26.592171 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 05:59:26.592185 | orchestrator | Friday 17 April 2026 05:59:18 +0000 (0:00:00.152) 0:04:21.658 ********** 2026-04-17 05:59:26.592194 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-17 05:59:26.592203 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:59:26.592211 | orchestrator | 2026-04-17 05:59:26.592220 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-17 05:59:26.592229 | orchestrator | Friday 17 April 2026 05:59:19 +0000 (0:00:00.670) 0:04:22.328 ********** 2026-04-17 05:59:26.592238 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:59:26.592246 | orchestrator | 2026-04-17 05:59:26.592255 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-17 05:59:26.592264 | orchestrator | Friday 17 April 2026 05:59:20 +0000 (0:00:00.950) 0:04:23.279 ********** 2026-04-17 05:59:26.592272 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:26.592281 | orchestrator | 2026-04-17 05:59:26.592290 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-17 05:59:26.592298 | orchestrator | Friday 17 April 2026 05:59:20 +0000 (0:00:00.170) 0:04:23.450 ********** 2026-04-17 05:59:26.592307 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-04-17 05:59:26.592317 | orchestrator | 2026-04-17 05:59:26.592325 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-17 05:59:26.592334 | orchestrator | Friday 17 April 2026 05:59:21 +0000 (0:00:00.660) 0:04:24.110 ********** 2026-04-17 05:59:26.592342 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-17 05:59:26.592351 | orchestrator | 2026-04-17 05:59:26.592360 | orchestrator | TASK [ceph-mon : Generate 
monitor initial keyring] ***************************** 2026-04-17 05:59:26.592369 | orchestrator | Friday 17 April 2026 05:59:23 +0000 (0:00:02.175) 0:04:26.286 ********** 2026-04-17 05:59:26.592377 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:59:26.592386 | orchestrator | 2026-04-17 05:59:26.592394 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-17 05:59:26.592403 | orchestrator | Friday 17 April 2026 05:59:23 +0000 (0:00:00.186) 0:04:26.473 ********** 2026-04-17 05:59:26.592412 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:26.592420 | orchestrator | 2026-04-17 05:59:26.592429 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-17 05:59:26.592437 | orchestrator | Friday 17 April 2026 05:59:23 +0000 (0:00:00.182) 0:04:26.655 ********** 2026-04-17 05:59:26.592446 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:26.592454 | orchestrator | 2026-04-17 05:59:26.592463 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-17 05:59:26.592472 | orchestrator | Friday 17 April 2026 05:59:24 +0000 (0:00:00.498) 0:04:27.153 ********** 2026-04-17 05:59:26.592480 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:59:26.592495 | orchestrator | 2026-04-17 05:59:26.592504 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-17 05:59:26.592512 | orchestrator | Friday 17 April 2026 05:59:25 +0000 (0:00:01.041) 0:04:28.194 ********** 2026-04-17 05:59:26.592521 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:26.592529 | orchestrator | 2026-04-17 05:59:26.592538 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-17 05:59:26.592546 | orchestrator | Friday 17 April 2026 05:59:26 +0000 (0:00:00.595) 0:04:28.790 ********** 2026-04-17 05:59:26.592555 | orchestrator | ok: 
[testbed-node-0] 2026-04-17 05:59:26.592564 | orchestrator | 2026-04-17 05:59:26.592577 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-17 05:59:59.108296 | orchestrator | Friday 17 April 2026 05:59:26 +0000 (0:00:00.537) 0:04:29.328 ********** 2026-04-17 05:59:59.108418 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.108436 | orchestrator | 2026-04-17 05:59:59.108449 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-17 05:59:59.108462 | orchestrator | Friday 17 April 2026 05:59:27 +0000 (0:00:00.554) 0:04:29.882 ********** 2026-04-17 05:59:59.108474 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.108485 | orchestrator | 2026-04-17 05:59:59.108497 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-17 05:59:59.108509 | orchestrator | Friday 17 April 2026 05:59:27 +0000 (0:00:00.729) 0:04:30.612 ********** 2026-04-17 05:59:59.108520 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.108531 | orchestrator | 2026-04-17 05:59:59.108542 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-17 05:59:59.108554 | orchestrator | Friday 17 April 2026 05:59:28 +0000 (0:00:00.702) 0:04:31.314 ********** 2026-04-17 05:59:59.108565 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-17 05:59:59.108577 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-17 05:59:59.108588 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 05:59:59.108600 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-04-17 05:59:59.108611 | orchestrator | 2026-04-17 05:59:59.108622 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-17 05:59:59.108634 | orchestrator | Friday 17 April 2026 05:59:31 +0000 
(0:00:03.013) 0:04:34.328 ********** 2026-04-17 05:59:59.108645 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:59:59.108656 | orchestrator | 2026-04-17 05:59:59.108667 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-17 05:59:59.108679 | orchestrator | Friday 17 April 2026 05:59:32 +0000 (0:00:01.111) 0:04:35.439 ********** 2026-04-17 05:59:59.108690 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.108701 | orchestrator | 2026-04-17 05:59:59.108713 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-17 05:59:59.108724 | orchestrator | Friday 17 April 2026 05:59:32 +0000 (0:00:00.139) 0:04:35.578 ********** 2026-04-17 05:59:59.108735 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.108747 | orchestrator | 2026-04-17 05:59:59.108758 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-17 05:59:59.108769 | orchestrator | Friday 17 April 2026 05:59:32 +0000 (0:00:00.163) 0:04:35.742 ********** 2026-04-17 05:59:59.108781 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.108792 | orchestrator | 2026-04-17 05:59:59.108803 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-17 05:59:59.108832 | orchestrator | Friday 17 April 2026 05:59:34 +0000 (0:00:01.167) 0:04:36.909 ********** 2026-04-17 05:59:59.108846 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.108859 | orchestrator | 2026-04-17 05:59:59.108872 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-17 05:59:59.108885 | orchestrator | Friday 17 April 2026 05:59:34 +0000 (0:00:00.472) 0:04:37.382 ********** 2026-04-17 05:59:59.108898 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:59:59.108911 | orchestrator | 2026-04-17 05:59:59.108947 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-04-17 05:59:59.108959 | orchestrator | Friday 17 April 2026 05:59:34 +0000 (0:00:00.152) 0:04:37.534 ********** 2026-04-17 05:59:59.108971 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-04-17 05:59:59.108983 | orchestrator | 2026-04-17 05:59:59.108994 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-17 05:59:59.109006 | orchestrator | Friday 17 April 2026 05:59:35 +0000 (0:00:01.102) 0:04:38.636 ********** 2026-04-17 05:59:59.109017 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:59:59.109028 | orchestrator | 2026-04-17 05:59:59.109039 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-17 05:59:59.109050 | orchestrator | Friday 17 April 2026 05:59:36 +0000 (0:00:00.145) 0:04:38.782 ********** 2026-04-17 05:59:59.109062 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:59:59.109106 | orchestrator | 2026-04-17 05:59:59.109117 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-17 05:59:59.109128 | orchestrator | Friday 17 April 2026 05:59:36 +0000 (0:00:00.152) 0:04:38.934 ********** 2026-04-17 05:59:59.109139 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-04-17 05:59:59.109149 | orchestrator | 2026-04-17 05:59:59.109160 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-17 05:59:59.109171 | orchestrator | Friday 17 April 2026 05:59:36 +0000 (0:00:00.617) 0:04:39.552 ********** 2026-04-17 05:59:59.109182 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.109193 | orchestrator | 2026-04-17 05:59:59.109204 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-17 05:59:59.109214 | orchestrator | Friday 17 April 2026 05:59:38 +0000 
(0:00:01.374) 0:04:40.926 ********** 2026-04-17 05:59:59.109225 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.109235 | orchestrator | 2026-04-17 05:59:59.109246 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-17 05:59:59.109256 | orchestrator | Friday 17 April 2026 05:59:39 +0000 (0:00:00.953) 0:04:41.880 ********** 2026-04-17 05:59:59.109267 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.109278 | orchestrator | 2026-04-17 05:59:59.109288 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-17 05:59:59.109299 | orchestrator | Friday 17 April 2026 05:59:40 +0000 (0:00:01.463) 0:04:43.343 ********** 2026-04-17 05:59:59.109310 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:59:59.109320 | orchestrator | 2026-04-17 05:59:59.109331 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-17 05:59:59.109342 | orchestrator | Friday 17 April 2026 05:59:42 +0000 (0:00:02.163) 0:04:45.507 ********** 2026-04-17 05:59:59.109352 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-04-17 05:59:59.109363 | orchestrator | 2026-04-17 05:59:59.109390 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-04-17 05:59:59.109402 | orchestrator | Friday 17 April 2026 05:59:43 +0000 (0:00:00.637) 0:04:46.144 ********** 2026-04-17 05:59:59.109413 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.109423 | orchestrator | 2026-04-17 05:59:59.109434 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-17 05:59:59.109445 | orchestrator | Friday 17 April 2026 05:59:44 +0000 (0:00:01.208) 0:04:47.353 ********** 2026-04-17 05:59:59.109455 | orchestrator | ok: [testbed-node-0] 2026-04-17 05:59:59.109466 | orchestrator | 2026-04-17 05:59:59.109476 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-17 05:59:59.109487 | orchestrator | Friday 17 April 2026 05:59:46 +0000 (0:00:02.318) 0:04:49.672 ********** 2026-04-17 05:59:59.109497 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:59:59.109508 | orchestrator | 2026-04-17 05:59:59.109519 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-17 05:59:59.109530 | orchestrator | Friday 17 April 2026 05:59:47 +0000 (0:00:00.137) 0:04:49.809 ********** 2026-04-17 05:59:59.109554 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-17 05:59:59.109568 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-04-17 05:59:59.109579 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-17 05:59:59.109591 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-17 05:59:59.109603 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-17 05:59:59.109616 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}])  2026-04-17 05:59:59.109629 | orchestrator | 2026-04-17 05:59:59.109640 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-04-17 05:59:59.109651 | orchestrator | Friday 17 April 2026 05:59:55 +0000 (0:00:08.695) 0:04:58.505 ********** 
2026-04-17 05:59:59.109661 | orchestrator | changed: [testbed-node-0] 2026-04-17 05:59:59.109672 | orchestrator | 2026-04-17 05:59:59.109683 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-17 05:59:59.109693 | orchestrator | Friday 17 April 2026 05:59:57 +0000 (0:00:01.626) 0:05:00.132 ********** 2026-04-17 05:59:59.109704 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 05:59:59.109715 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-17 05:59:59.109726 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-17 05:59:59.109737 | orchestrator | 2026-04-17 05:59:59.109747 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-17 05:59:59.109758 | orchestrator | Friday 17 April 2026 05:59:58 +0000 (0:00:01.218) 0:05:01.351 ********** 2026-04-17 05:59:59.109769 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-17 05:59:59.109780 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-17 05:59:59.109790 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-17 05:59:59.109801 | orchestrator | skipping: [testbed-node-0] 2026-04-17 05:59:59.109812 | orchestrator | 2026-04-17 05:59:59.109823 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-04-17 05:59:59.109839 | orchestrator | Friday 17 April 2026 05:59:59 +0000 (0:00:00.493) 0:05:01.844 ********** 2026-04-17 06:00:11.035535 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:00:11.035632 | orchestrator | 2026-04-17 06:00:11.035650 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-04-17 06:00:11.035663 | orchestrator | Friday 17 April 2026 05:59:59 +0000 (0:00:00.135) 0:05:01.980 ********** 2026-04-17 06:00:11.035674 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:00:11.035686 | orchestrator | 2026-04-17 06:00:11.035697 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-17 06:00:11.035708 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-17 06:00:11.035718 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-17 06:00:11.035779 | orchestrator | Friday 17 April 2026 06:00:00 +0000 (0:00:01.426) 0:05:03.406 ********** 2026-04-17 06:00:11.035790 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:00:11.035801 | orchestrator | 2026-04-17 06:00:11.035811 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-17 06:00:11.035822 | orchestrator | Friday 17 April 2026 06:00:00 +0000 (0:00:00.134) 0:05:03.541 ********** 2026-04-17 06:00:11.035833 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:00:11.035844 | orchestrator | 2026-04-17 06:00:11.035855 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-17 06:00:11.035866 | orchestrator | Friday 17 April 2026 06:00:00 +0000 (0:00:00.141) 0:05:03.682 ********** 2026-04-17 06:00:11.035877 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:00:11.035887 | orchestrator | 2026-04-17 06:00:11.035898 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-17 06:00:11.035910 | orchestrator | Friday 17 April 2026 06:00:01 +0000 (0:00:00.481) 0:05:04.164 ********** 2026-04-17 06:00:11.035921 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:00:11.035932 | orchestrator | 2026-04-17 06:00:11.035943 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] 
********************************** 2026-04-17 06:00:11.035954 | orchestrator | Friday 17 April 2026 06:00:01 +0000 (0:00:00.144) 0:05:04.309 ********** 2026-04-17 06:00:11.035965 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:00:11.035976 | orchestrator | 2026-04-17 06:00:11.035987 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-17 06:00:11.035997 | orchestrator | Friday 17 April 2026 06:00:01 +0000 (0:00:00.155) 0:05:04.464 ********** 2026-04-17 06:00:11.036008 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:00:11.036019 | orchestrator | 2026-04-17 06:00:11.036030 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-17 06:00:11.036041 | orchestrator | Friday 17 April 2026 06:00:01 +0000 (0:00:00.140) 0:05:04.604 ********** 2026-04-17 06:00:11.036118 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:00:11.036133 | orchestrator | 2026-04-17 06:00:11.036147 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-04-17 06:00:11.036160 | orchestrator | 2026-04-17 06:00:11.036173 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-04-17 06:00:11.036186 | orchestrator | Friday 17 April 2026 06:00:02 +0000 (0:00:00.662) 0:05:05.267 ********** 2026-04-17 06:00:11.036199 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:11.036212 | orchestrator | 2026-04-17 06:00:11.036224 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-04-17 06:00:11.036238 | orchestrator | Friday 17 April 2026 06:00:02 +0000 (0:00:00.447) 0:05:05.714 ********** 2026-04-17 06:00:11.036251 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:11.036263 | orchestrator | 2026-04-17 06:00:11.036277 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-04-17 
06:00:11.036290 | orchestrator | Friday 17 April 2026 06:00:03 +0000 (0:00:00.161) 0:05:05.875 ********** 2026-04-17 06:00:11.036302 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:11.036331 | orchestrator | 2026-04-17 06:00:11.036342 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-04-17 06:00:11.036353 | orchestrator | Friday 17 April 2026 06:00:03 +0000 (0:00:00.151) 0:05:06.027 ********** 2026-04-17 06:00:11.036364 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:11.036375 | orchestrator | 2026-04-17 06:00:11.036386 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:00:11.036397 | orchestrator | Friday 17 April 2026 06:00:03 +0000 (0:00:00.149) 0:05:06.176 ********** 2026-04-17 06:00:11.036407 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-04-17 06:00:11.036418 | orchestrator | 2026-04-17 06:00:11.036429 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 06:00:11.036440 | orchestrator | Friday 17 April 2026 06:00:03 +0000 (0:00:00.240) 0:05:06.417 ********** 2026-04-17 06:00:11.036451 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:11.036461 | orchestrator | 2026-04-17 06:00:11.036472 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-17 06:00:11.036483 | orchestrator | Friday 17 April 2026 06:00:04 +0000 (0:00:00.531) 0:05:06.949 ********** 2026-04-17 06:00:11.036493 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:11.036504 | orchestrator | 2026-04-17 06:00:11.036515 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 06:00:11.036525 | orchestrator | Friday 17 April 2026 06:00:04 +0000 (0:00:00.518) 0:05:07.467 ********** 2026-04-17 06:00:11.036536 | orchestrator | ok: [testbed-node-1] 2026-04-17 
06:00:11.036547 | orchestrator | 2026-04-17 06:00:11.036557 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:00:11.036568 | orchestrator | Friday 17 April 2026 06:00:05 +0000 (0:00:00.502) 0:05:07.970 ********** 2026-04-17 06:00:11.036578 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:11.036589 | orchestrator | 2026-04-17 06:00:11.036600 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-17 06:00:11.036610 | orchestrator | Friday 17 April 2026 06:00:05 +0000 (0:00:00.221) 0:05:08.192 ********** 2026-04-17 06:00:11.036621 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:11.036632 | orchestrator | 2026-04-17 06:00:11.036642 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-17 06:00:11.036669 | orchestrator | Friday 17 April 2026 06:00:05 +0000 (0:00:00.194) 0:05:08.387 ********** 2026-04-17 06:00:11.036681 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:11.036692 | orchestrator | 2026-04-17 06:00:11.036702 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-17 06:00:11.036713 | orchestrator | Friday 17 April 2026 06:00:05 +0000 (0:00:00.175) 0:05:08.562 ********** 2026-04-17 06:00:11.036724 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:11.036735 | orchestrator | 2026-04-17 06:00:11.036745 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-17 06:00:11.036756 | orchestrator | Friday 17 April 2026 06:00:05 +0000 (0:00:00.170) 0:05:08.732 ********** 2026-04-17 06:00:11.036767 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:11.036777 | orchestrator | 2026-04-17 06:00:11.036788 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-17 06:00:11.036799 | orchestrator | Friday 17 April 2026 06:00:06 
+0000 (0:00:00.171) 0:05:08.904 ********** 2026-04-17 06:00:11.036810 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:00:11.036820 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-17 06:00:11.036832 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:00:11.036842 | orchestrator | 2026-04-17 06:00:11.036853 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-17 06:00:11.036864 | orchestrator | Friday 17 April 2026 06:00:06 +0000 (0:00:00.735) 0:05:09.640 ********** 2026-04-17 06:00:11.036875 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:11.036885 | orchestrator | 2026-04-17 06:00:11.036903 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-17 06:00:11.036914 | orchestrator | Friday 17 April 2026 06:00:07 +0000 (0:00:00.260) 0:05:09.900 ********** 2026-04-17 06:00:11.036925 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:00:11.036935 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-17 06:00:11.036946 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:00:11.036956 | orchestrator | 2026-04-17 06:00:11.036967 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-17 06:00:11.036978 | orchestrator | Friday 17 April 2026 06:00:09 +0000 (0:00:02.281) 0:05:12.182 ********** 2026-04-17 06:00:11.036988 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-17 06:00:11.037000 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-17 06:00:11.037010 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-17 06:00:11.037021 | orchestrator | skipping: [testbed-node-1] 
2026-04-17 06:00:11.037031 | orchestrator |
2026-04-17 06:00:11.037084 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-17 06:00:11.037096 | orchestrator | Friday 17 April 2026 06:00:09 +0000 (0:00:00.419) 0:05:12.601 **********
2026-04-17 06:00:11.037109 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:00:11.037122 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:00:11.037133 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:00:11.037144 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:11.037155 | orchestrator |
2026-04-17 06:00:11.037166 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-17 06:00:11.037177 | orchestrator | Friday 17 April 2026 06:00:10 +0000 (0:00:01.087) 0:05:13.689 **********
2026-04-17 06:00:11.037189 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:11.037202 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:11.037222 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:15.528125 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.528258 | orchestrator |
2026-04-17 06:00:15.528282 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-17 06:00:15.528302 | orchestrator | Friday 17 April 2026 06:00:11 +0000 (0:00:00.173) 0:05:13.863 **********
2026-04-17 06:00:15.528358 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:00:07.750279', 'end': '2026-04-17 06:00:07.800895', 'delta': '0:00:00.050616', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:00:15.528384 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '9f8a3fd74f0b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:00:08.311559', 'end': '2026-04-17 06:00:08.359001', 'delta': '0:00:00.047442', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9f8a3fd74f0b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:00:15.528424 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'f2e2f728469b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:00:09.239153', 'end': '2026-04-17 06:00:09.291073', 'delta': '0:00:00.051920', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2e2f728469b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:00:15.528446 | orchestrator |
2026-04-17 06:00:15.528465 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-17 06:00:15.528482 | orchestrator | Friday 17 April 2026 06:00:11 +0000 (0:00:00.267) 0:05:14.440 **********
2026-04-17 06:00:15.528499 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:00:15.528517 | orchestrator |
2026-04-17 06:00:15.528537 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-17 06:00:15.528555 | orchestrator | Friday 17 April 2026 06:00:11 +0000 (0:00:00.312) 0:05:14.707 **********
2026-04-17 06:00:15.528573 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.528590 | orchestrator |
2026-04-17 06:00:15.528610 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-17 06:00:15.528630 | orchestrator | Friday 17 April 2026 06:00:12 +0000 (0:00:00.152) 0:05:15.020 **********
2026-04-17 06:00:15.528649 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:00:15.528669 | orchestrator |
2026-04-17 06:00:15.528685 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-17 06:00:15.528703 | orchestrator | Friday 17 April 2026 06:00:12 +0000 (0:00:00.152) 0:05:15.172 **********
2026-04-17 06:00:15.528720 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:00:15.528737 | orchestrator |
2026-04-17 06:00:15.528755 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:00:15.528772 | orchestrator | Friday 17 April 2026 06:00:13 +0000 (0:00:01.007) 0:05:16.180 **********
2026-04-17 06:00:15.528787 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:00:15.528803 | orchestrator |
2026-04-17 06:00:15.528822 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-17 06:00:15.528856 | orchestrator | Friday 17 April 2026 06:00:13 +0000 (0:00:00.167) 0:05:16.348 **********
2026-04-17 06:00:15.528875 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.528892 | orchestrator |
2026-04-17 06:00:15.528910 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-17 06:00:15.528927 | orchestrator | Friday 17 April 2026 06:00:13 +0000 (0:00:00.139) 0:05:16.487 **********
2026-04-17 06:00:15.528946 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.528963 | orchestrator |
2026-04-17 06:00:15.528980 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:00:15.528997 | orchestrator | Friday 17 April 2026 06:00:13 +0000 (0:00:00.244) 0:05:16.731 **********
2026-04-17 06:00:15.529014 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.529033 | orchestrator |
2026-04-17 06:00:15.529113 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-17 06:00:15.529135 | orchestrator | Friday 17 April 2026 06:00:14 +0000 (0:00:00.141) 0:05:16.873 **********
2026-04-17 06:00:15.529154 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.529172 | orchestrator |
2026-04-17 06:00:15.529192 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-17 06:00:15.529209 | orchestrator | Friday 17 April 2026 06:00:14 +0000 (0:00:00.148) 0:05:17.022 **********
2026-04-17 06:00:15.529226 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.529244 | orchestrator |
2026-04-17 06:00:15.529263 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-17 06:00:15.529280 | orchestrator | Friday 17 April 2026 06:00:14 +0000 (0:00:00.141) 0:05:17.164 **********
2026-04-17 06:00:15.529298 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.529316 | orchestrator |
2026-04-17 06:00:15.529332 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-17 06:00:15.529347 | orchestrator | Friday 17 April 2026 06:00:14 +0000 (0:00:00.171) 0:05:17.335 **********
2026-04-17 06:00:15.529365 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.529383 | orchestrator |
2026-04-17 06:00:15.529398 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-17 06:00:15.529415 | orchestrator | Friday 17 April 2026 06:00:14 +0000 (0:00:00.133) 0:05:17.469 **********
2026-04-17 06:00:15.529432 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.529449 | orchestrator |
2026-04-17 06:00:15.529467 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-17 06:00:15.529487 | orchestrator | Friday 17 April 2026 06:00:14 +0000 (0:00:00.148) 0:05:17.617 **********
2026-04-17 06:00:15.529505 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.529523 | orchestrator |
2026-04-17 06:00:15.529542 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-17 06:00:15.529560 | orchestrator | Friday 17 April 2026 06:00:15 +0000 (0:00:00.505) 0:05:18.123 **********
2026-04-17 06:00:15.529593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:00:15.529616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:00:15.529636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:00:15.529672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-17 06:00:15.529692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:00:15.529712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:00:15.529747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:00:15.784818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41525a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-17 06:00:15.784937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:00:15.784954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:00:15.784965 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:15.784977 | orchestrator |
2026-04-17 06:00:15.784987 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-17 06:00:15.784998 | orchestrator | Friday 17 April 2026 06:00:15 +0000 (0:00:00.276) 0:05:18.400 **********
2026-04-17 06:00:15.785010 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:15.785038 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:15.785084 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:15.785103 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:15.785121 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:15.785130 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:15.785140 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:15.785166 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41525a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:30.448199 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:30.448281 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:00:30.448288 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448294 | orchestrator |
2026-04-17 06:00:30.448299 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-17 06:00:30.448304 | orchestrator | Friday 17 April 2026 06:00:15 +0000 (0:00:00.266) 0:05:18.667 **********
2026-04-17 06:00:30.448308 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:00:30.448312 | orchestrator |
2026-04-17 06:00:30.448317 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-17 06:00:30.448321 | orchestrator | Friday 17 April 2026 06:00:16 +0000 (0:00:00.486) 0:05:19.154 **********
2026-04-17 06:00:30.448325 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:00:30.448328 | orchestrator |
2026-04-17 06:00:30.448332 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:00:30.448336 | orchestrator | Friday 17 April 2026 06:00:16 +0000 (0:00:00.130) 0:05:19.285 **********
2026-04-17 06:00:30.448340 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:00:30.448343 | orchestrator |
2026-04-17 06:00:30.448347 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:00:30.448351 | orchestrator | Friday 17 April 2026 06:00:17 +0000 (0:00:00.487) 0:05:19.773 **********
2026-04-17 06:00:30.448355 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448359 | orchestrator |
2026-04-17 06:00:30.448362 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:00:30.448366 | orchestrator | Friday 17 April 2026 06:00:17 +0000 (0:00:00.134) 0:05:19.907 **********
2026-04-17 06:00:30.448370 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448374 | orchestrator |
2026-04-17 06:00:30.448377 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:00:30.448381 | orchestrator | Friday 17 April 2026 06:00:17 +0000 (0:00:00.252) 0:05:20.160 **********
2026-04-17 06:00:30.448385 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448389 | orchestrator |
2026-04-17 06:00:30.448392 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 06:00:30.448396 | orchestrator | Friday 17 April 2026 06:00:17 +0000 (0:00:00.176) 0:05:20.336 **********
2026-04-17 06:00:30.448400 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-17 06:00:30.448405 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 06:00:30.448421 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-17 06:00:30.448425 | orchestrator |
2026-04-17 06:00:30.448429 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 06:00:30.448432 | orchestrator | Friday 17 April 2026 06:00:18 +0000 (0:00:01.178) 0:05:21.514 **********
2026-04-17 06:00:30.448436 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-17 06:00:30.448440 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 06:00:30.448444 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-17 06:00:30.448448 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448452 | orchestrator |
2026-04-17 06:00:30.448455 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-17 06:00:30.448459 | orchestrator | Friday 17 April 2026 06:00:18 +0000 (0:00:00.178) 0:05:21.692 **********
2026-04-17 06:00:30.448463 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448467 | orchestrator |
2026-04-17 06:00:30.448470 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-17 06:00:30.448474 | orchestrator | Friday 17 April 2026 06:00:19 +0000 (0:00:00.142) 0:05:21.835 **********
2026-04-17 06:00:30.448487 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:00:30.448492 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 06:00:30.448495 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:00:30.448499 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 06:00:30.448503 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 06:00:30.448507 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:00:30.448519 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:00:30.448523 | orchestrator |
2026-04-17 06:00:30.448527 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-17 06:00:30.448531 | orchestrator | Friday 17 April 2026 06:00:20 +0000 (0:00:01.253) 0:05:23.089 **********
2026-04-17 06:00:30.448534 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:00:30.448538 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 06:00:30.448542 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:00:30.448546 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 06:00:30.448550 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 06:00:30.448553 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:00:30.448557 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:00:30.448561 | orchestrator |
2026-04-17 06:00:30.448564 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-04-17 06:00:30.448568 | orchestrator | Friday 17 April 2026 06:00:22 +0000 (0:00:02.522) 0:05:25.611 **********
2026-04-17 06:00:30.448572 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448576 | orchestrator |
2026-04-17 06:00:30.448579 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-04-17 06:00:30.448583 | orchestrator | Friday 17 April 2026 06:00:23 +0000 (0:00:00.286) 0:05:25.898 **********
2026-04-17 06:00:30.448587 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448591 | orchestrator |
2026-04-17 06:00:30.448594 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-04-17 06:00:30.448598 | orchestrator | Friday 17 April 2026 06:00:23 +0000 (0:00:00.244) 0:05:26.143 **********
2026-04-17 06:00:30.448602 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448606 | orchestrator |
2026-04-17 06:00:30.448614 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-04-17 06:00:30.448618 | orchestrator | Friday 17 April 2026 06:00:23 +0000 (0:00:00.139) 0:05:26.282 **********
2026-04-17 06:00:30.448622 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448626 | orchestrator |
2026-04-17 06:00:30.448629 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-04-17 06:00:30.448633 | orchestrator | Friday 17 April 2026 06:00:23 +0000 (0:00:00.270) 0:05:26.553 **********
2026-04-17 06:00:30.448637 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448640 | orchestrator |
2026-04-17 06:00:30.448644 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-04-17 06:00:30.448648 | orchestrator | Friday 17 April 2026 06:00:23 +0000 (0:00:00.157) 0:05:26.711 **********
2026-04-17 06:00:30.448651 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-17 06:00:30.448655 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 06:00:30.448659 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-17 06:00:30.448662 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448666 | orchestrator |
2026-04-17 06:00:30.448670 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-04-17 06:00:30.448673 | orchestrator | Friday 17 April 2026 06:00:24 +0000 (0:00:00.480) 0:05:27.192 **********
2026-04-17 06:00:30.448677 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-04-17 06:00:30.448681 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-04-17 06:00:30.448685 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-04-17 06:00:30.448688 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-04-17 06:00:30.448692 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-04-17 06:00:30.448695 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-04-17 06:00:30.448699 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:30.448703 | orchestrator |
2026-04-17 06:00:30.448706 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-04-17 06:00:30.448710 | orchestrator | Friday 17 April 2026 06:00:25 +0000 (0:00:01.119) 0:05:28.312 **********
2026-04-17 06:00:30.448714 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 06:00:30.448718 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 06:00:30.448721 | orchestrator |
2026-04-17 06:00:30.448725 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-04-17 06:00:30.448729 | orchestrator | Friday 17 April 2026 06:00:28 +0000 (0:00:02.606) 0:05:30.918 **********
2026-04-17 06:00:30.448732 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:00:30.448736 | orchestrator |
2026-04-17 06:00:30.448740 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 06:00:30.448743 | orchestrator | Friday 17 April 2026 06:00:29 +0000 (0:00:01.486) 0:05:32.405 **********
2026-04-17 06:00:30.448750 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-04-17 06:00:30.448753 | orchestrator |
2026-04-17 06:00:30.448757 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 06:00:30.448761 | orchestrator | Friday 17 April 2026 06:00:29 +0000 (0:00:00.200) 0:05:32.606 **********
2026-04-17 06:00:30.448765 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-04-17 06:00:30.448770 | orchestrator |
2026-04-17 06:00:30.448774 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 06:00:30.448780 | orchestrator | Friday 17 April 2026 06:00:30 +0000 (0:00:00.579) 0:05:33.186 **********
2026-04-17 06:00:42.840434 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:00:42.840568 | orchestrator |
2026-04-17 06:00:42.840588 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 06:00:42.840632 | orchestrator | Friday 17 April 2026 06:00:31 +0000 (0:00:00.560) 0:05:33.747 **********
2026-04-17 06:00:42.840650 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:42.840668 | orchestrator |
2026-04-17 06:00:42.840684 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 06:00:42.840702 | orchestrator | Friday 17 April 2026 06:00:31 +0000 (0:00:00.141) 0:05:33.888 **********
2026-04-17 06:00:42.840712 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:42.840722 | orchestrator |
2026-04-17 06:00:42.840731 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 06:00:42.840741 | orchestrator | Friday 17 April 2026 06:00:31 +0000 (0:00:00.187) 0:05:34.076 **********
2026-04-17 06:00:42.840751 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:42.840768 | orchestrator |
2026-04-17 06:00:42.840785 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 06:00:42.840801 | orchestrator | Friday 17 April 2026 06:00:31 +0000 (0:00:00.156) 0:05:34.233 **********
2026-04-17 06:00:42.840816 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:00:42.840832 | orchestrator |
2026-04-17 06:00:42.840849 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 06:00:42.840863 | orchestrator | Friday 17 April 2026 06:00:32 +0000 (0:00:00.577) 0:05:34.810 **********
2026-04-17 06:00:42.840878 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:42.840892 | orchestrator |
2026-04-17 06:00:42.840907 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 06:00:42.840924 | orchestrator | Friday 17 April 2026 06:00:32 +0000 (0:00:00.142) 0:05:34.953 **********
2026-04-17 06:00:42.840940 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:42.840959 | orchestrator |
2026-04-17 06:00:42.840976 | orchestrator | TASK [ceph-handler :
Check for a ceph-crash container] ************************* 2026-04-17 06:00:42.840992 | orchestrator | Friday 17 April 2026 06:00:32 +0000 (0:00:00.141) 0:05:35.094 ********** 2026-04-17 06:00:42.841010 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:42.841024 | orchestrator | 2026-04-17 06:00:42.841036 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 06:00:42.841047 | orchestrator | Friday 17 April 2026 06:00:32 +0000 (0:00:00.603) 0:05:35.698 ********** 2026-04-17 06:00:42.841058 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:42.841069 | orchestrator | 2026-04-17 06:00:42.841111 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 06:00:42.841131 | orchestrator | Friday 17 April 2026 06:00:33 +0000 (0:00:00.596) 0:05:36.295 ********** 2026-04-17 06:00:42.841148 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.841169 | orchestrator | 2026-04-17 06:00:42.841187 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 06:00:42.841204 | orchestrator | Friday 17 April 2026 06:00:33 +0000 (0:00:00.148) 0:05:36.443 ********** 2026-04-17 06:00:42.841220 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:42.841238 | orchestrator | 2026-04-17 06:00:42.841255 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 06:00:42.841271 | orchestrator | Friday 17 April 2026 06:00:33 +0000 (0:00:00.174) 0:05:36.618 ********** 2026-04-17 06:00:42.841282 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.841294 | orchestrator | 2026-04-17 06:00:42.841305 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 06:00:42.841317 | orchestrator | Friday 17 April 2026 06:00:34 +0000 (0:00:00.467) 0:05:37.085 ********** 2026-04-17 06:00:42.841327 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 06:00:42.841343 | orchestrator | 2026-04-17 06:00:42.841360 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 06:00:42.841376 | orchestrator | Friday 17 April 2026 06:00:34 +0000 (0:00:00.148) 0:05:37.233 ********** 2026-04-17 06:00:42.841391 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.841406 | orchestrator | 2026-04-17 06:00:42.841420 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 06:00:42.841450 | orchestrator | Friday 17 April 2026 06:00:34 +0000 (0:00:00.150) 0:05:37.384 ********** 2026-04-17 06:00:42.841468 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.841485 | orchestrator | 2026-04-17 06:00:42.841501 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 06:00:42.841518 | orchestrator | Friday 17 April 2026 06:00:34 +0000 (0:00:00.159) 0:05:37.544 ********** 2026-04-17 06:00:42.841534 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.841551 | orchestrator | 2026-04-17 06:00:42.841568 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 06:00:42.841586 | orchestrator | Friday 17 April 2026 06:00:34 +0000 (0:00:00.148) 0:05:37.692 ********** 2026-04-17 06:00:42.841598 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:42.841608 | orchestrator | 2026-04-17 06:00:42.841618 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 06:00:42.841633 | orchestrator | Friday 17 April 2026 06:00:35 +0000 (0:00:00.169) 0:05:37.862 ********** 2026-04-17 06:00:42.841649 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:42.841666 | orchestrator | 2026-04-17 06:00:42.841683 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 06:00:42.841700 | 
orchestrator | Friday 17 April 2026 06:00:35 +0000 (0:00:00.231) 0:05:38.093 ********** 2026-04-17 06:00:42.841733 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:42.841747 | orchestrator | 2026-04-17 06:00:42.841760 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-17 06:00:42.841776 | orchestrator | Friday 17 April 2026 06:00:35 +0000 (0:00:00.225) 0:05:38.318 ********** 2026-04-17 06:00:42.841792 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.841807 | orchestrator | 2026-04-17 06:00:42.841821 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-17 06:00:42.841839 | orchestrator | Friday 17 April 2026 06:00:35 +0000 (0:00:00.135) 0:05:38.453 ********** 2026-04-17 06:00:42.841852 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.841862 | orchestrator | 2026-04-17 06:00:42.841871 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-17 06:00:42.841903 | orchestrator | Friday 17 April 2026 06:00:35 +0000 (0:00:00.164) 0:05:38.618 ********** 2026-04-17 06:00:42.841913 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.841923 | orchestrator | 2026-04-17 06:00:42.841932 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-17 06:00:42.841942 | orchestrator | Friday 17 April 2026 06:00:36 +0000 (0:00:00.136) 0:05:38.755 ********** 2026-04-17 06:00:42.841951 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.841961 | orchestrator | 2026-04-17 06:00:42.841971 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-17 06:00:42.841980 | orchestrator | Friday 17 April 2026 06:00:36 +0000 (0:00:00.151) 0:05:38.907 ********** 2026-04-17 06:00:42.841989 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.841999 | orchestrator | 2026-04-17 
06:00:42.842008 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-17 06:00:42.842079 | orchestrator | Friday 17 April 2026 06:00:36 +0000 (0:00:00.590) 0:05:39.497 ********** 2026-04-17 06:00:42.842140 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.842152 | orchestrator | 2026-04-17 06:00:42.842162 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-17 06:00:42.842171 | orchestrator | Friday 17 April 2026 06:00:36 +0000 (0:00:00.157) 0:05:39.655 ********** 2026-04-17 06:00:42.842181 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.842190 | orchestrator | 2026-04-17 06:00:42.842200 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-17 06:00:42.842211 | orchestrator | Friday 17 April 2026 06:00:37 +0000 (0:00:00.142) 0:05:39.798 ********** 2026-04-17 06:00:42.842220 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.842230 | orchestrator | 2026-04-17 06:00:42.842249 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-17 06:00:42.842259 | orchestrator | Friday 17 April 2026 06:00:37 +0000 (0:00:00.137) 0:05:39.935 ********** 2026-04-17 06:00:42.842268 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.842278 | orchestrator | 2026-04-17 06:00:42.842287 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-17 06:00:42.842297 | orchestrator | Friday 17 April 2026 06:00:37 +0000 (0:00:00.155) 0:05:40.091 ********** 2026-04-17 06:00:42.842306 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.842316 | orchestrator | 2026-04-17 06:00:42.842325 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-17 06:00:42.842349 | orchestrator | Friday 17 April 2026 06:00:37 +0000 
(0:00:00.143) 0:05:40.235 ********** 2026-04-17 06:00:42.842359 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.842380 | orchestrator | 2026-04-17 06:00:42.842389 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-17 06:00:42.842399 | orchestrator | Friday 17 April 2026 06:00:37 +0000 (0:00:00.142) 0:05:40.377 ********** 2026-04-17 06:00:42.842409 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.842418 | orchestrator | 2026-04-17 06:00:42.842428 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-17 06:00:42.842437 | orchestrator | Friday 17 April 2026 06:00:37 +0000 (0:00:00.206) 0:05:40.584 ********** 2026-04-17 06:00:42.842447 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:42.842456 | orchestrator | 2026-04-17 06:00:42.842466 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-17 06:00:42.842476 | orchestrator | Friday 17 April 2026 06:00:38 +0000 (0:00:00.982) 0:05:41.566 ********** 2026-04-17 06:00:42.842486 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:42.842495 | orchestrator | 2026-04-17 06:00:42.842505 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-17 06:00:42.842514 | orchestrator | Friday 17 April 2026 06:00:40 +0000 (0:00:01.448) 0:05:43.015 ********** 2026-04-17 06:00:42.842524 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-04-17 06:00:42.842535 | orchestrator | 2026-04-17 06:00:42.842545 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-17 06:00:42.842554 | orchestrator | Friday 17 April 2026 06:00:40 +0000 (0:00:00.235) 0:05:43.251 ********** 2026-04-17 06:00:42.842564 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.842574 | orchestrator | 2026-04-17 
06:00:42.842583 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-17 06:00:42.842593 | orchestrator | Friday 17 April 2026 06:00:40 +0000 (0:00:00.138) 0:05:43.389 ********** 2026-04-17 06:00:42.842603 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.842612 | orchestrator | 2026-04-17 06:00:42.842622 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-17 06:00:42.842632 | orchestrator | Friday 17 April 2026 06:00:41 +0000 (0:00:00.478) 0:05:43.867 ********** 2026-04-17 06:00:42.842641 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 06:00:42.842651 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 06:00:42.842660 | orchestrator | 2026-04-17 06:00:42.842670 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-17 06:00:42.842679 | orchestrator | Friday 17 April 2026 06:00:42 +0000 (0:00:00.908) 0:05:44.776 ********** 2026-04-17 06:00:42.842689 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:42.842699 | orchestrator | 2026-04-17 06:00:42.842715 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-17 06:00:42.842725 | orchestrator | Friday 17 April 2026 06:00:42 +0000 (0:00:00.448) 0:05:45.225 ********** 2026-04-17 06:00:42.842734 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.842744 | orchestrator | 2026-04-17 06:00:42.842753 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-17 06:00:42.842769 | orchestrator | Friday 17 April 2026 06:00:42 +0000 (0:00:00.153) 0:05:45.378 ********** 2026-04-17 06:00:42.842778 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:42.842788 | orchestrator | 2026-04-17 06:00:42.842797 | orchestrator | TASK 
[ceph-container-common : Include registry.yml] **************************** 2026-04-17 06:00:42.842807 | orchestrator | Friday 17 April 2026 06:00:42 +0000 (0:00:00.144) 0:05:45.523 ********** 2026-04-17 06:00:42.842826 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.464062 | orchestrator | 2026-04-17 06:00:57.465813 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-17 06:00:57.465856 | orchestrator | Friday 17 April 2026 06:00:42 +0000 (0:00:00.137) 0:05:45.660 ********** 2026-04-17 06:00:57.465869 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-04-17 06:00:57.465881 | orchestrator | 2026-04-17 06:00:57.465893 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-17 06:00:57.465904 | orchestrator | Friday 17 April 2026 06:00:43 +0000 (0:00:00.285) 0:05:45.946 ********** 2026-04-17 06:00:57.465915 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:57.465927 | orchestrator | 2026-04-17 06:00:57.465938 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-17 06:00:57.465950 | orchestrator | Friday 17 April 2026 06:00:43 +0000 (0:00:00.720) 0:05:46.666 ********** 2026-04-17 06:00:57.465961 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-17 06:00:57.465972 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 06:00:57.465983 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 06:00:57.465994 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466005 | orchestrator | 2026-04-17 06:00:57.466082 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-17 06:00:57.466097 | orchestrator | Friday 17 April 2026 06:00:44 +0000 
(0:00:00.151) 0:05:46.818 ********** 2026-04-17 06:00:57.466109 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466143 | orchestrator | 2026-04-17 06:00:57.466154 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-17 06:00:57.466165 | orchestrator | Friday 17 April 2026 06:00:44 +0000 (0:00:00.132) 0:05:46.950 ********** 2026-04-17 06:00:57.466175 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466186 | orchestrator | 2026-04-17 06:00:57.466197 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-17 06:00:57.466208 | orchestrator | Friday 17 April 2026 06:00:44 +0000 (0:00:00.164) 0:05:47.115 ********** 2026-04-17 06:00:57.466219 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466230 | orchestrator | 2026-04-17 06:00:57.466241 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-17 06:00:57.466252 | orchestrator | Friday 17 April 2026 06:00:44 +0000 (0:00:00.161) 0:05:47.277 ********** 2026-04-17 06:00:57.466263 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466274 | orchestrator | 2026-04-17 06:00:57.466285 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-17 06:00:57.466296 | orchestrator | Friday 17 April 2026 06:00:45 +0000 (0:00:00.499) 0:05:47.776 ********** 2026-04-17 06:00:57.466307 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466318 | orchestrator | 2026-04-17 06:00:57.466329 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-17 06:00:57.466339 | orchestrator | Friday 17 April 2026 06:00:45 +0000 (0:00:00.161) 0:05:47.937 ********** 2026-04-17 06:00:57.466350 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:57.466361 | orchestrator | 2026-04-17 06:00:57.466372 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-17 06:00:57.466383 | orchestrator | Friday 17 April 2026 06:00:46 +0000 (0:00:01.654) 0:05:49.591 ********** 2026-04-17 06:00:57.466394 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:57.466405 | orchestrator | 2026-04-17 06:00:57.466439 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-17 06:00:57.466450 | orchestrator | Friday 17 April 2026 06:00:46 +0000 (0:00:00.143) 0:05:49.735 ********** 2026-04-17 06:00:57.466461 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-04-17 06:00:57.466472 | orchestrator | 2026-04-17 06:00:57.466483 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-17 06:00:57.466493 | orchestrator | Friday 17 April 2026 06:00:47 +0000 (0:00:00.213) 0:05:49.948 ********** 2026-04-17 06:00:57.466504 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466515 | orchestrator | 2026-04-17 06:00:57.466526 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-17 06:00:57.466542 | orchestrator | Friday 17 April 2026 06:00:47 +0000 (0:00:00.167) 0:05:50.116 ********** 2026-04-17 06:00:57.466560 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466580 | orchestrator | 2026-04-17 06:00:57.466598 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-17 06:00:57.466615 | orchestrator | Friday 17 April 2026 06:00:47 +0000 (0:00:00.165) 0:05:50.281 ********** 2026-04-17 06:00:57.466633 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466653 | orchestrator | 2026-04-17 06:00:57.466672 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-17 06:00:57.466691 | orchestrator | Friday 17 April 2026 06:00:47 +0000 
(0:00:00.155) 0:05:50.437 ********** 2026-04-17 06:00:57.466709 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466728 | orchestrator | 2026-04-17 06:00:57.466748 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-17 06:00:57.466767 | orchestrator | Friday 17 April 2026 06:00:47 +0000 (0:00:00.162) 0:05:50.600 ********** 2026-04-17 06:00:57.466795 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466806 | orchestrator | 2026-04-17 06:00:57.466817 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-17 06:00:57.466827 | orchestrator | Friday 17 April 2026 06:00:48 +0000 (0:00:00.205) 0:05:50.805 ********** 2026-04-17 06:00:57.466838 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466849 | orchestrator | 2026-04-17 06:00:57.466860 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-17 06:00:57.466870 | orchestrator | Friday 17 April 2026 06:00:48 +0000 (0:00:00.165) 0:05:50.970 ********** 2026-04-17 06:00:57.466881 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466892 | orchestrator | 2026-04-17 06:00:57.466902 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-17 06:00:57.466939 | orchestrator | Friday 17 April 2026 06:00:48 +0000 (0:00:00.177) 0:05:51.148 ********** 2026-04-17 06:00:57.466950 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.466961 | orchestrator | 2026-04-17 06:00:57.466972 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-17 06:00:57.466983 | orchestrator | Friday 17 April 2026 06:00:48 +0000 (0:00:00.545) 0:05:51.693 ********** 2026-04-17 06:00:57.466994 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:00:57.467004 | orchestrator | 2026-04-17 06:00:57.467015 | orchestrator | TASK [ceph-config : 
Include create_ceph_initial_dirs.yml] ********************** 2026-04-17 06:00:57.467026 | orchestrator | Friday 17 April 2026 06:00:49 +0000 (0:00:00.258) 0:05:51.953 ********** 2026-04-17 06:00:57.467042 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-04-17 06:00:57.467060 | orchestrator | 2026-04-17 06:00:57.467078 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-17 06:00:57.467096 | orchestrator | Friday 17 April 2026 06:00:49 +0000 (0:00:00.216) 0:05:52.169 ********** 2026-04-17 06:00:57.467142 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-04-17 06:00:57.467163 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-17 06:00:57.467180 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-17 06:00:57.467235 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-17 06:00:57.467255 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-17 06:00:57.467273 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-17 06:00:57.467292 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-17 06:00:57.467311 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-17 06:00:57.467354 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-17 06:00:57.467367 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-17 06:00:57.467378 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-17 06:00:57.467389 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-17 06:00:57.467399 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-17 06:00:57.467410 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-17 06:00:57.467421 | 
orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-04-17 06:00:57.467432 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-04-17 06:00:57.467443 | orchestrator | 2026-04-17 06:00:57.467454 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-17 06:00:57.467465 | orchestrator | Friday 17 April 2026 06:00:55 +0000 (0:00:05.908) 0:05:58.077 ********** 2026-04-17 06:00:57.467475 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.467486 | orchestrator | 2026-04-17 06:00:57.467497 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-17 06:00:57.467508 | orchestrator | Friday 17 April 2026 06:00:55 +0000 (0:00:00.163) 0:05:58.241 ********** 2026-04-17 06:00:57.467519 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.467529 | orchestrator | 2026-04-17 06:00:57.467540 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-17 06:00:57.467551 | orchestrator | Friday 17 April 2026 06:00:55 +0000 (0:00:00.146) 0:05:58.387 ********** 2026-04-17 06:00:57.467562 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.467573 | orchestrator | 2026-04-17 06:00:57.467583 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-17 06:00:57.467594 | orchestrator | Friday 17 April 2026 06:00:55 +0000 (0:00:00.130) 0:05:58.518 ********** 2026-04-17 06:00:57.467605 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.467616 | orchestrator | 2026-04-17 06:00:57.467626 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-17 06:00:57.467637 | orchestrator | Friday 17 April 2026 06:00:55 +0000 (0:00:00.158) 0:05:58.676 ********** 2026-04-17 06:00:57.467648 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.467659 | orchestrator | 2026-04-17 
06:00:57.467670 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-17 06:00:57.467681 | orchestrator | Friday 17 April 2026 06:00:56 +0000 (0:00:00.152) 0:05:58.829 ********** 2026-04-17 06:00:57.467691 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.467702 | orchestrator | 2026-04-17 06:00:57.467713 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-17 06:00:57.467724 | orchestrator | Friday 17 April 2026 06:00:56 +0000 (0:00:00.154) 0:05:58.983 ********** 2026-04-17 06:00:57.467735 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.467746 | orchestrator | 2026-04-17 06:00:57.467757 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-17 06:00:57.467768 | orchestrator | Friday 17 April 2026 06:00:56 +0000 (0:00:00.149) 0:05:59.132 ********** 2026-04-17 06:00:57.467778 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.467789 | orchestrator | 2026-04-17 06:00:57.467800 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-17 06:00:57.467818 | orchestrator | Friday 17 April 2026 06:00:56 +0000 (0:00:00.574) 0:05:59.707 ********** 2026-04-17 06:00:57.467837 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.467848 | orchestrator | 2026-04-17 06:00:57.467859 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-17 06:00:57.467870 | orchestrator | Friday 17 April 2026 06:00:57 +0000 (0:00:00.153) 0:05:59.861 ********** 2026-04-17 06:00:57.467881 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:00:57.467891 | orchestrator | 2026-04-17 06:00:57.467902 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 
2026-04-17 06:00:57.467913 | orchestrator | Friday 17 April 2026 06:00:57 +0000 (0:00:00.153) 0:06:00.015 **********
2026-04-17 06:00:57.467924 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:00:57.467935 | orchestrator |
2026-04-17 06:00:57.467956 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-17 06:01:17.257086 | orchestrator | Friday 17 April 2026 06:00:57 +0000 (0:00:00.188) 0:06:00.203 **********
2026-04-17 06:01:17.257229 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257246 | orchestrator |
2026-04-17 06:01:17.257258 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-17 06:01:17.257268 | orchestrator | Friday 17 April 2026 06:00:57 +0000 (0:00:00.158) 0:06:00.362 **********
2026-04-17 06:01:17.257278 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257288 | orchestrator |
2026-04-17 06:01:17.257298 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-17 06:01:17.257308 | orchestrator | Friday 17 April 2026 06:00:57 +0000 (0:00:00.244) 0:06:00.606 **********
2026-04-17 06:01:17.257317 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257327 | orchestrator |
2026-04-17 06:01:17.257337 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-17 06:01:17.257346 | orchestrator | Friday 17 April 2026 06:00:58 +0000 (0:00:00.155) 0:06:00.762 **********
2026-04-17 06:01:17.257356 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257365 | orchestrator |
2026-04-17 06:01:17.257374 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-17 06:01:17.257384 | orchestrator | Friday 17 April 2026 06:00:58 +0000 (0:00:00.268) 0:06:01.030 **********
2026-04-17 06:01:17.257393 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257403 | orchestrator |
2026-04-17 06:01:17.257413 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-17 06:01:17.257422 | orchestrator | Friday 17 April 2026 06:00:58 +0000 (0:00:00.139) 0:06:01.170 **********
2026-04-17 06:01:17.257432 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257441 | orchestrator |
2026-04-17 06:01:17.257451 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:01:17.257462 | orchestrator | Friday 17 April 2026 06:00:58 +0000 (0:00:00.155) 0:06:01.325 **********
2026-04-17 06:01:17.257472 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257482 | orchestrator |
2026-04-17 06:01:17.257492 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:01:17.257501 | orchestrator | Friday 17 April 2026 06:00:58 +0000 (0:00:00.151) 0:06:01.477 **********
2026-04-17 06:01:17.257511 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257520 | orchestrator |
2026-04-17 06:01:17.257530 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:01:17.257540 | orchestrator | Friday 17 April 2026 06:00:58 +0000 (0:00:00.155) 0:06:01.632 **********
2026-04-17 06:01:17.257549 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257561 | orchestrator |
2026-04-17 06:01:17.257572 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:01:17.257583 | orchestrator | Friday 17 April 2026 06:00:59 +0000 (0:00:00.164) 0:06:01.797 **********
2026-04-17 06:01:17.257594 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257605 | orchestrator |
2026-04-17 06:01:17.257616 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:01:17.257628 | orchestrator | Friday 17 April 2026 06:00:59 +0000 (0:00:00.156) 0:06:01.954 **********
2026-04-17 06:01:17.257678 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-17 06:01:17.257692 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-17 06:01:17.257703 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-17 06:01:17.257714 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257725 | orchestrator |
2026-04-17 06:01:17.257737 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 06:01:17.257748 | orchestrator | Friday 17 April 2026 06:01:00 +0000 (0:00:01.339) 0:06:03.293 **********
2026-04-17 06:01:17.257759 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-17 06:01:17.257769 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-17 06:01:17.257780 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-17 06:01:17.257791 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257802 | orchestrator |
2026-04-17 06:01:17.257813 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 06:01:17.257824 | orchestrator | Friday 17 April 2026 06:01:00 +0000 (0:00:00.433) 0:06:03.727 **********
2026-04-17 06:01:17.257835 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-17 06:01:17.257846 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-17 06:01:17.257857 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-17 06:01:17.257868 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257879 | orchestrator |
2026-04-17 06:01:17.257890 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 06:01:17.257902 | orchestrator | Friday 17 April 2026 06:01:01 +0000 (0:00:00.454) 0:06:04.181 **********
2026-04-17 06:01:17.257913 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257923 | orchestrator |
2026-04-17 06:01:17.257932 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 06:01:17.257942 | orchestrator | Friday 17 April 2026 06:01:01 +0000 (0:00:00.152) 0:06:04.333 **********
2026-04-17 06:01:17.257977 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-17 06:01:17.257988 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.257998 | orchestrator |
2026-04-17 06:01:17.258007 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-17 06:01:17.258077 | orchestrator | Friday 17 April 2026 06:01:01 +0000 (0:00:00.399) 0:06:04.733 **********
2026-04-17 06:01:17.258090 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:01:17.258099 | orchestrator |
2026-04-17 06:01:17.258127 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-17 06:01:17.258137 | orchestrator | Friday 17 April 2026 06:01:02 +0000 (0:00:00.861) 0:06:05.594 **********
2026-04-17 06:01:17.258147 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:01:17.258157 | orchestrator |
2026-04-17 06:01:17.258167 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-17 06:01:17.258196 | orchestrator | Friday 17 April 2026 06:01:03 +0000 (0:00:00.187) 0:06:05.782 **********
2026-04-17 06:01:17.258206 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-04-17 06:01:17.258217 | orchestrator |
2026-04-17 06:01:17.258226 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-17 06:01:17.258236 | orchestrator | Friday 17 April 2026 06:01:03 +0000 (0:00:00.268) 0:06:06.051 **********
2026-04-17 06:01:17.258245 |
orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-04-17 06:01:17.258255 | orchestrator |
2026-04-17 06:01:17.258264 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-17 06:01:17.258274 | orchestrator | Friday 17 April 2026 06:01:05 +0000 (0:00:02.160) 0:06:08.212 **********
2026-04-17 06:01:17.258284 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.258293 | orchestrator |
2026-04-17 06:01:17.258303 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-17 06:01:17.258322 | orchestrator | Friday 17 April 2026 06:01:05 +0000 (0:00:00.171) 0:06:08.383 **********
2026-04-17 06:01:17.258332 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:01:17.258341 | orchestrator |
2026-04-17 06:01:17.258351 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-17 06:01:17.258360 | orchestrator | Friday 17 April 2026 06:01:06 +0000 (0:00:00.556) 0:06:08.940 **********
2026-04-17 06:01:17.258370 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:01:17.258379 | orchestrator |
2026-04-17 06:01:17.258389 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-17 06:01:17.258398 | orchestrator | Friday 17 April 2026 06:01:06 +0000 (0:00:00.185) 0:06:09.126 **********
2026-04-17 06:01:17.258408 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:01:17.258417 | orchestrator |
2026-04-17 06:01:17.258426 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-17 06:01:17.258436 | orchestrator | Friday 17 April 2026 06:01:07 +0000 (0:00:01.135) 0:06:10.262 **********
2026-04-17 06:01:17.258445 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:01:17.258455 | orchestrator |
2026-04-17 06:01:17.258464 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-17 06:01:17.258474 | orchestrator | Friday 17 April 2026 06:01:08 +0000 (0:00:00.677) 0:06:10.940 **********
2026-04-17 06:01:17.258483 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:01:17.258493 | orchestrator |
2026-04-17 06:01:17.258502 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-17 06:01:17.258512 | orchestrator | Friday 17 April 2026 06:01:08 +0000 (0:00:00.491) 0:06:11.432 **********
2026-04-17 06:01:17.258521 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:01:17.258531 | orchestrator |
2026-04-17 06:01:17.258545 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-17 06:01:17.258561 | orchestrator | Friday 17 April 2026 06:01:09 +0000 (0:00:00.497) 0:06:11.930 **********
2026-04-17 06:01:17.258578 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:01:17.258597 | orchestrator |
2026-04-17 06:01:17.258614 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-17 06:01:17.258631 | orchestrator | Friday 17 April 2026 06:01:09 +0000 (0:00:00.585) 0:06:12.515 **********
2026-04-17 06:01:17.258642 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:01:17.258651 | orchestrator |
2026-04-17 06:01:17.258661 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-17 06:01:17.258670 | orchestrator | Friday 17 April 2026 06:01:10 +0000 (0:00:00.597) 0:06:13.112 **********
2026-04-17 06:01:17.258680 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:01:17.258690 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-17 06:01:17.258699 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-17 06:01:17.258713 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-17 06:01:17.258727 | orchestrator |
2026-04-17 06:01:17.258737 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-17 06:01:17.258746 | orchestrator | Friday 17 April 2026 06:01:13 +0000 (0:00:02.831) 0:06:15.943 **********
2026-04-17 06:01:17.258756 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:01:17.258765 | orchestrator |
2026-04-17 06:01:17.258775 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-17 06:01:17.258785 | orchestrator | Friday 17 April 2026 06:01:14 +0000 (0:00:01.132) 0:06:17.076 **********
2026-04-17 06:01:17.258794 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:01:17.258803 | orchestrator |
2026-04-17 06:01:17.258813 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-17 06:01:17.258823 | orchestrator | Friday 17 April 2026 06:01:14 +0000 (0:00:00.142) 0:06:17.218 **********
2026-04-17 06:01:17.258832 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:01:17.258842 | orchestrator |
2026-04-17 06:01:17.258851 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-17 06:01:17.258867 | orchestrator | Friday 17 April 2026 06:01:14 +0000 (0:00:00.192) 0:06:17.411 **********
2026-04-17 06:01:17.258876 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:01:17.258886 | orchestrator |
2026-04-17 06:01:17.258895 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-17 06:01:17.258911 | orchestrator | Friday 17 April 2026 06:01:16 +0000 (0:00:01.342) 0:06:18.753 **********
2026-04-17 06:01:17.258921 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:01:17.258930 | orchestrator |
2026-04-17 06:01:17.258940 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-17 06:01:17.258949 | orchestrator | Friday 17 April 2026 06:01:16 +0000 (0:00:00.875) 0:06:19.629 **********
2026-04-17 06:01:17.258959 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:01:17.258968 | orchestrator |
2026-04-17 06:01:17.258978 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-17 06:01:17.258987 | orchestrator | Friday 17 April 2026 06:01:17 +0000 (0:00:00.135) 0:06:19.765 **********
2026-04-17 06:01:17.258997 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1
2026-04-17 06:01:17.259007 | orchestrator |
2026-04-17 06:01:17.259023 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-17 06:02:06.187311 | orchestrator | Friday 17 April 2026 06:01:17 +0000 (0:00:00.229) 0:06:19.994 **********
2026-04-17 06:02:06.187432 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.187449 | orchestrator |
2026-04-17 06:02:06.187462 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-17 06:02:06.187475 | orchestrator | Friday 17 April 2026 06:01:17 +0000 (0:00:00.131) 0:06:20.126 **********
2026-04-17 06:02:06.187487 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.187498 | orchestrator |
2026-04-17 06:02:06.187509 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-17 06:02:06.187520 | orchestrator | Friday 17 April 2026 06:01:17 +0000 (0:00:00.136) 0:06:20.263 **********
2026-04-17 06:02:06.187531 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1
2026-04-17 06:02:06.187541 | orchestrator |
2026-04-17 06:02:06.187552 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-17 06:02:06.187563 | orchestrator | Friday 17 April 2026 06:01:17 +0000 (0:00:00.218) 0:06:20.481 **********
2026-04-17 06:02:06.187574 | orchestrator | ok:
[testbed-node-1]
2026-04-17 06:02:06.187585 | orchestrator |
2026-04-17 06:02:06.187596 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-17 06:02:06.187607 | orchestrator | Friday 17 April 2026 06:01:19 +0000 (0:00:01.376) 0:06:21.857 **********
2026-04-17 06:02:06.187618 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:02:06.187628 | orchestrator |
2026-04-17 06:02:06.187639 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-17 06:02:06.187650 | orchestrator | Friday 17 April 2026 06:01:20 +0000 (0:00:00.981) 0:06:22.838 **********
2026-04-17 06:02:06.187661 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:02:06.187672 | orchestrator |
2026-04-17 06:02:06.187683 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-17 06:02:06.187693 | orchestrator | Friday 17 April 2026 06:01:21 +0000 (0:00:01.475) 0:06:24.314 **********
2026-04-17 06:02:06.187704 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:02:06.187715 | orchestrator |
2026-04-17 06:02:06.187726 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-17 06:02:06.187737 | orchestrator | Friday 17 April 2026 06:01:24 +0000 (0:00:03.409) 0:06:27.723 **********
2026-04-17 06:02:06.187747 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1
2026-04-17 06:02:06.187759 | orchestrator |
2026-04-17 06:02:06.187770 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-17 06:02:06.187789 | orchestrator | Friday 17 April 2026 06:01:25 +0000 (0:00:00.628) 0:06:28.351 **********
2026-04-17 06:02:06.187840 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-17 06:02:06.187862 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:02:06.187882 | orchestrator |
2026-04-17 06:02:06.187896 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-17 06:02:06.187910 | orchestrator | Friday 17 April 2026 06:01:47 +0000 (0:00:21.983) 0:06:50.334 **********
2026-04-17 06:02:06.187923 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:02:06.187936 | orchestrator |
2026-04-17 06:02:06.187948 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-17 06:02:06.187959 | orchestrator | Friday 17 April 2026 06:01:49 +0000 (0:00:02.045) 0:06:52.380 **********
2026-04-17 06:02:06.187969 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.187980 | orchestrator |
2026-04-17 06:02:06.187991 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-17 06:02:06.188002 | orchestrator | Friday 17 April 2026 06:01:49 +0000 (0:00:00.135) 0:06:52.516 **********
2026-04-17 06:02:06.188015 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-17 06:02:06.188029 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-17 06:02:06.188056 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-17 06:02:06.188068 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-17 06:02:06.188137 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-17 06:02:06.188162 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}])
2026-04-17 06:02:06.188183 | orchestrator |
2026-04-17 06:02:06.188201 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-04-17 06:02:06.188220 | orchestrator | Friday 17 April 2026 06:01:58 +0000 (0:00:09.023) 0:07:01.540 **********
2026-04-17 06:02:06.188238 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:02:06.188258 | orchestrator |
2026-04-17 06:02:06.188277 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 06:02:06.188296 | orchestrator | Friday 17 April 2026 06:02:00 +0000 (0:00:01.517) 0:07:03.057 **********
2026-04-17 06:02:06.188320 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:02:06.188331 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-04-17 06:02:06.188342 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-04-17 06:02:06.188352 | orchestrator |
2026-04-17 06:02:06.188363 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 06:02:06.188374 | orchestrator | Friday 17 April 2026 06:02:01 +0000 (0:00:01.285) 0:07:04.342 **********
2026-04-17 06:02:06.188384 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-17 06:02:06.188396 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 06:02:06.188407 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-17 06:02:06.188418 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.188428 | orchestrator |
2026-04-17 06:02:06.188439 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-04-17 06:02:06.188450 | orchestrator | Friday 17 April 2026 06:02:02 +0000 (0:00:00.509) 0:07:04.852 **********
2026-04-17 06:02:06.188460 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.188471 | orchestrator |
2026-04-17 06:02:06.188482 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...]
***
2026-04-17 06:02:06.188492 | orchestrator | Friday 17 April 2026 06:02:02 +0000 (0:00:00.131) 0:07:04.984 **********
2026-04-17 06:02:06.188503 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:02:06.188513 | orchestrator |
2026-04-17 06:02:06.188524 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-17 06:02:06.188535 | orchestrator | Friday 17 April 2026 06:02:03 +0000 (0:00:01.443) 0:07:06.427 **********
2026-04-17 06:02:06.188545 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.188556 | orchestrator |
2026-04-17 06:02:06.188566 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-17 06:02:06.188577 | orchestrator | Friday 17 April 2026 06:02:04 +0000 (0:00:00.514) 0:07:06.942 **********
2026-04-17 06:02:06.188588 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.188598 | orchestrator |
2026-04-17 06:02:06.188609 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-17 06:02:06.188619 | orchestrator | Friday 17 April 2026 06:02:04 +0000 (0:00:00.132) 0:07:07.074 **********
2026-04-17 06:02:06.188630 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.188641 | orchestrator |
2026-04-17 06:02:06.188651 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-17 06:02:06.188662 | orchestrator | Friday 17 April 2026 06:02:04 +0000 (0:00:00.139) 0:07:07.214 **********
2026-04-17 06:02:06.188673 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.188684 | orchestrator |
2026-04-17 06:02:06.188694 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-17 06:02:06.188705 | orchestrator | Friday 17 April 2026 06:02:04 +0000 (0:00:00.140) 0:07:07.354 **********
2026-04-17 06:02:06.188716 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.188726 | orchestrator |
2026-04-17 06:02:06.188737 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-17 06:02:06.188747 | orchestrator | Friday 17 April 2026 06:02:04 +0000 (0:00:00.158) 0:07:07.513 **********
2026-04-17 06:02:06.188758 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.188769 | orchestrator |
2026-04-17 06:02:06.188779 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-17 06:02:06.188797 | orchestrator | Friday 17 April 2026 06:02:04 +0000 (0:00:00.154) 0:07:07.667 **********
2026-04-17 06:02:06.188808 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:02:06.188819 | orchestrator |
2026-04-17 06:02:06.188829 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-17 06:02:06.188840 | orchestrator |
2026-04-17 06:02:06.188851 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-17 06:02:06.188877 | orchestrator | Friday 17 April 2026 06:02:05 +0000 (0:00:00.606) 0:07:08.274 **********
2026-04-17 06:02:06.188896 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:06.188914 | orchestrator |
2026-04-17 06:02:06.188932 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-17 06:02:06.188950 | orchestrator | Friday 17 April 2026 06:02:06 +0000 (0:00:00.497) 0:07:08.772 **********
2026-04-17 06:02:06.188968 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:06.188987 | orchestrator |
2026-04-17 06:02:06.189006 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-17 06:02:06.189036 | orchestrator | Friday 17 April 2026 06:02:06 +0000 (0:00:00.153) 0:07:08.925 **********
2026-04-17 06:02:15.473045 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:15.473181 | orchestrator |
2026-04-17 06:02:15.473199 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-17 06:02:15.473211 | orchestrator | Friday 17 April 2026 06:02:06 +0000 (0:00:00.153) 0:07:09.061 **********
2026-04-17 06:02:15.473223 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:15.473234 | orchestrator |
2026-04-17 06:02:15.473245 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-17 06:02:15.473256 | orchestrator | Friday 17 April 2026 06:02:06 +0000 (0:00:00.153) 0:07:09.215 **********
2026-04-17 06:02:15.473267 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-04-17 06:02:15.473278 | orchestrator |
2026-04-17 06:02:15.473289 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-17 06:02:15.473300 | orchestrator | Friday 17 April 2026 06:02:07 +0000 (0:00:00.611) 0:07:09.826 **********
2026-04-17 06:02:15.473311 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:15.473322 | orchestrator |
2026-04-17 06:02:15.473333 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-17 06:02:15.473344 | orchestrator | Friday 17 April 2026 06:02:07 +0000 (0:00:00.490) 0:07:10.316 **********
2026-04-17 06:02:15.473354 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:15.473365 | orchestrator |
2026-04-17 06:02:15.473376 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-17 06:02:15.473387 | orchestrator | Friday 17 April 2026 06:02:07 +0000 (0:00:00.154) 0:07:10.471 **********
2026-04-17 06:02:15.473397 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:15.473408 | orchestrator |
2026-04-17 06:02:15.473420 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-17 06:02:15.473431 | orchestrator | Friday 17 April 2026 06:02:08 +0000 (0:00:00.504) 0:07:10.976 **********
2026-04-17 06:02:15.473442 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:15.473453 | orchestrator |
2026-04-17 06:02:15.473463 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-17 06:02:15.473474 | orchestrator | Friday 17 April 2026 06:02:08 +0000 (0:00:00.154) 0:07:11.131 **********
2026-04-17 06:02:15.473485 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:15.473496 | orchestrator |
2026-04-17 06:02:15.473507 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-17 06:02:15.473518 | orchestrator | Friday 17 April 2026 06:02:08 +0000 (0:00:00.166) 0:07:11.297 **********
2026-04-17 06:02:15.473529 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:15.473541 | orchestrator |
2026-04-17 06:02:15.473554 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-17 06:02:15.473567 | orchestrator | Friday 17 April 2026 06:02:08 +0000 (0:00:00.164) 0:07:11.461 **********
2026-04-17 06:02:15.473580 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:15.473593 | orchestrator |
2026-04-17 06:02:15.473606 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-17 06:02:15.473618 | orchestrator | Friday 17 April 2026 06:02:08 +0000 (0:00:00.156) 0:07:11.618 **********
2026-04-17 06:02:15.473630 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:15.473642 | orchestrator |
2026-04-17 06:02:15.473655 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-17 06:02:15.473690 | orchestrator | Friday 17 April 2026 06:02:09 +0000 (0:00:00.152) 0:07:11.770 **********
2026-04-17 06:02:15.473705 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:02:15.473718 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] =>
(item=testbed-node-1)
2026-04-17 06:02:15.473731 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:02:15.473744 | orchestrator |
2026-04-17 06:02:15.473757 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-17 06:02:15.473769 | orchestrator | Friday 17 April 2026 06:02:10 +0000 (0:00:01.108) 0:07:12.879 **********
2026-04-17 06:02:15.473781 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:15.473794 | orchestrator |
2026-04-17 06:02:15.473808 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-17 06:02:15.473820 | orchestrator | Friday 17 April 2026 06:02:10 +0000 (0:00:00.265) 0:07:13.144 **********
2026-04-17 06:02:15.473833 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:02:15.473846 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:02:15.473859 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:02:15.473872 | orchestrator |
2026-04-17 06:02:15.473885 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-17 06:02:15.473897 | orchestrator | Friday 17 April 2026 06:02:12 +0000 (0:00:02.253) 0:07:15.398 **********
2026-04-17 06:02:15.473907 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-17 06:02:15.473919 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-17 06:02:15.473929 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:02:15.473954 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:15.473966 | orchestrator |
2026-04-17 06:02:15.473976 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-17 06:02:15.473987 | orchestrator | Friday 17 April 2026 06:02:13 +0000 (0:00:00.876) 0:07:16.275 **********
2026-04-17 06:02:15.473999 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:02:15.474014 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:02:15.474131 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:02:15.474145 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:15.474156 | orchestrator |
2026-04-17 06:02:15.474166 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-17 06:02:15.474177 | orchestrator | Friday 17 April 2026 06:02:14 +0000 (0:00:01.196) 0:07:17.471 **********
2026-04-17 06:02:15.474190 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:02:15.474205 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:02:15.474226 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:02:15.474237 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:15.474248 | orchestrator |
2026-04-17 06:02:15.474259 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-17 06:02:15.474270 | orchestrator | Friday 17 April 2026 06:02:15 +0000 (0:00:00.600) 0:07:18.072 **********
2026-04-17 06:02:15.474283 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:02:10.913979', 'end': '2026-04-17 06:02:10.961620', 'delta': '0:00:00.047641', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:02:15.474315 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:02:11.867191', 'end': '2026-04-17 06:02:11.914458', 'delta': '0:00:00.047267', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:02:15.474335 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'f2e2f728469b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:02:12.461890', 'end': '2026-04-17 06:02:12.518652', 'delta': '0:00:00.056762', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2e2f728469b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:02:19.315703 | orchestrator |
2026-04-17 06:02:19.315806 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-17 06:02:19.315820 | orchestrator | Friday 17 April 2026 06:02:15 +0000 (0:00:00.247) 0:07:18.320 **********
2026-04-17 06:02:19.315828 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:19.315837 | orchestrator |
2026-04-17 06:02:19.315846 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-17 06:02:19.315854 | orchestrator | Friday 17 April 2026 06:02:15 +0000 (0:00:00.281) 0:07:18.602 **********
2026-04-17 06:02:19.315862 | orchestrator |
skipping: [testbed-node-2]
2026-04-17 06:02:19.315871 | orchestrator |
2026-04-17 06:02:19.315879 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-17 06:02:19.315887 | orchestrator | Friday 17 April 2026 06:02:16 +0000 (0:00:00.306) 0:07:18.908 **********
2026-04-17 06:02:19.315915 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:19.315923 | orchestrator |
2026-04-17 06:02:19.315931 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-17 06:02:19.315939 | orchestrator | Friday 17 April 2026 06:02:16 +0000 (0:00:00.162) 0:07:19.070 **********
2026-04-17 06:02:19.315947 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:02:19.315955 | orchestrator |
2026-04-17 06:02:19.315963 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:02:19.315971 | orchestrator | Friday 17 April 2026 06:02:17 +0000 (0:00:00.945) 0:07:20.016 **********
2026-04-17 06:02:19.315979 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:19.315987 | orchestrator |
2026-04-17 06:02:19.315995 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-17 06:02:19.316003 | orchestrator | Friday 17 April 2026 06:02:17 +0000 (0:00:00.140) 0:07:20.156 **********
2026-04-17 06:02:19.316011 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:19.316019 | orchestrator |
2026-04-17 06:02:19.316027 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-17 06:02:19.316035 | orchestrator | Friday 17 April 2026 06:02:17 +0000 (0:00:00.151) 0:07:20.307 **********
2026-04-17 06:02:19.316042 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:19.316050 | orchestrator |
2026-04-17 06:02:19.316058 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:02:19.316066 | orchestrator | Friday 17 April 2026 06:02:17 +0000 (0:00:00.286) 0:07:20.594 **********
2026-04-17 06:02:19.316073 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:19.316081 | orchestrator |
2026-04-17 06:02:19.316116 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-17 06:02:19.316124 | orchestrator | Friday 17 April 2026 06:02:17 +0000 (0:00:00.132) 0:07:20.726 **********
2026-04-17 06:02:19.316132 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:19.316139 | orchestrator |
2026-04-17 06:02:19.316147 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-17 06:02:19.316155 | orchestrator | Friday 17 April 2026 06:02:18 +0000 (0:00:00.150) 0:07:20.877 **********
2026-04-17 06:02:19.316163 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:19.316171 | orchestrator |
2026-04-17 06:02:19.316179 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-17 06:02:19.316186 | orchestrator | Friday 17 April 2026 06:02:18 +0000 (0:00:00.128) 0:07:21.005 **********
2026-04-17 06:02:19.316194 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:19.316202 | orchestrator |
2026-04-17 06:02:19.316210 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-17 06:02:19.316218 | orchestrator | Friday 17 April 2026 06:02:18 +0000 (0:00:00.132) 0:07:21.138 **********
2026-04-17 06:02:19.316225 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:19.316233 | orchestrator |
2026-04-17 06:02:19.316241 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-17 06:02:19.316249 | orchestrator | Friday 17 April 2026 06:02:18 +0000 (0:00:00.496) 0:07:21.635 **********
2026-04-17 06:02:19.316257 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:19.316264
| orchestrator | 2026-04-17 06:02:19.316272 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 06:02:19.316281 | orchestrator | Friday 17 April 2026 06:02:19 +0000 (0:00:00.128) 0:07:21.763 ********** 2026-04-17 06:02:19.316289 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:19.316297 | orchestrator | 2026-04-17 06:02:19.316305 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 06:02:19.316313 | orchestrator | Friday 17 April 2026 06:02:19 +0000 (0:00:00.167) 0:07:21.931 ********** 2026-04-17 06:02:19.316336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:02:19.316354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:02:19.316379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-04-17 06:02:19.316388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-36-58-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 06:02:19.316399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:02:19.316407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:02:19.316415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 
06:02:19.316437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60cf27b4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 06:02:19.550933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:02:19.551036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:02:19.551053 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:19.551066 | orchestrator | 2026-04-17 06:02:19.551079 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 06:02:19.551142 | orchestrator | Friday 17 April 2026 06:02:19 +0000 (0:00:00.242) 0:07:22.173 ********** 2026-04-17 06:02:19.551165 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:02:19.551183 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:02:19.551195 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:02:19.551249 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-36-58-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:02:19.551282 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:02:19.551294 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:02:19.551306 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:02:19.551327 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60cf27b4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:02:19.551358 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:02:33.893893 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:02:33.893999 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
06:02:33.894064 | orchestrator |
2026-04-17 06:02:33.894120 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-17 06:02:33.894132 | orchestrator | Friday 17 April 2026 06:02:19 +0000 (0:00:00.236) 0:07:22.410 **********
2026-04-17 06:02:33.894141 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:33.894150 | orchestrator |
2026-04-17 06:02:33.894159 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-17 06:02:33.894168 | orchestrator | Friday 17 April 2026 06:02:20 +0000 (0:00:00.508) 0:07:22.919 **********
2026-04-17 06:02:33.894177 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:33.894185 | orchestrator |
2026-04-17 06:02:33.894194 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:02:33.894203 | orchestrator | Friday 17 April 2026 06:02:20 +0000 (0:00:00.149) 0:07:23.069 **********
2026-04-17 06:02:33.894211 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:33.894220 | orchestrator |
2026-04-17 06:02:33.894229 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:02:33.894238 | orchestrator | Friday 17 April 2026 06:02:20 +0000 (0:00:00.491) 0:07:23.560 **********
2026-04-17 06:02:33.894246 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.894255 | orchestrator |
2026-04-17 06:02:33.894263 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:02:33.894272 | orchestrator | Friday 17 April 2026 06:02:20 +0000 (0:00:00.151) 0:07:23.712 **********
2026-04-17 06:02:33.894303 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.894312 | orchestrator |
2026-04-17 06:02:33.894321 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:02:33.894329 | orchestrator | Friday 17 April 2026 06:02:21 +0000 (0:00:00.246) 0:07:23.958 **********
2026-04-17 06:02:33.894338 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.894347 | orchestrator |
2026-04-17 06:02:33.894355 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 06:02:33.894363 | orchestrator | Friday 17 April 2026 06:02:21 +0000 (0:00:00.165) 0:07:24.124 **********
2026-04-17 06:02:33.894372 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-17 06:02:33.894381 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-17 06:02:33.894390 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:02:33.894403 | orchestrator |
2026-04-17 06:02:33.894418 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 06:02:33.894432 | orchestrator | Friday 17 April 2026 06:02:22 +0000 (0:00:01.141) 0:07:25.266 **********
2026-04-17 06:02:33.894447 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-17 06:02:33.894461 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-17 06:02:33.894477 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:02:33.894491 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.894501 | orchestrator |
2026-04-17 06:02:33.894511 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-17 06:02:33.894521 | orchestrator | Friday 17 April 2026 06:02:22 +0000 (0:00:00.538) 0:07:25.436 **********
2026-04-17 06:02:33.894531 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.894541 | orchestrator |
2026-04-17 06:02:33.894551 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-17 06:02:33.894561 | orchestrator | Friday 17 April 2026 06:02:23 +0000 (0:00:00.538) 0:07:25.974 **********
2026-04-17 06:02:33.894584 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:02:33.894595 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:02:33.894605 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:02:33.894615 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 06:02:33.894625 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 06:02:33.894635 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:02:33.894645 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:02:33.894654 | orchestrator |
2026-04-17 06:02:33.894664 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-17 06:02:33.894674 | orchestrator | Friday 17 April 2026 06:02:24 +0000 (0:00:00.871) 0:07:26.846 **********
2026-04-17 06:02:33.894685 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:02:33.894694 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:02:33.894704 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:02:33.894714 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 06:02:33.894740 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 06:02:33.894752 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:02:33.894762 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:02:33.894774 | orchestrator |
2026-04-17 06:02:33.894790 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-04-17 06:02:33.894804 | orchestrator | Friday 17 April 2026 06:02:25 +0000 (0:00:01.734) 0:07:28.580 **********
2026-04-17 06:02:33.894829 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.894843 | orchestrator |
2026-04-17 06:02:33.894857 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-04-17 06:02:33.894871 | orchestrator | Friday 17 April 2026 06:02:26 +0000 (0:00:00.248) 0:07:28.829 **********
2026-04-17 06:02:33.894884 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.894897 | orchestrator |
2026-04-17 06:02:33.894912 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-04-17 06:02:33.894926 | orchestrator | Friday 17 April 2026 06:02:26 +0000 (0:00:00.249) 0:07:29.078 **********
2026-04-17 06:02:33.894941 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.894956 | orchestrator |
2026-04-17 06:02:33.894970 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-04-17 06:02:33.894984 | orchestrator | Friday 17 April 2026 06:02:26 +0000 (0:00:00.140) 0:07:29.219 **********
2026-04-17 06:02:33.894999 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.895015 | orchestrator |
2026-04-17 06:02:33.895027 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-04-17 06:02:33.895036 | orchestrator | Friday 17 April 2026 06:02:26 +0000 (0:00:00.250) 0:07:29.469 **********
2026-04-17 06:02:33.895044 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.895053 | orchestrator |
2026-04-17 06:02:33.895061 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-04-17 06:02:33.895070 | orchestrator | Friday 17 April 2026 06:02:26 +0000 (0:00:00.148) 0:07:29.617 **********
2026-04-17 06:02:33.895103 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-17 06:02:33.895113 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-17 06:02:33.895122 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:02:33.895130 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.895139 | orchestrator |
2026-04-17 06:02:33.895147 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-04-17 06:02:33.895156 | orchestrator | Friday 17 April 2026 06:02:27 +0000 (0:00:00.425) 0:07:30.043 **********
2026-04-17 06:02:33.895164 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-04-17 06:02:33.895173 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-04-17 06:02:33.895181 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-04-17 06:02:33.895189 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-04-17 06:02:33.895198 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-04-17 06:02:33.895206 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-04-17 06:02:33.895214 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.895223 | orchestrator |
2026-04-17 06:02:33.895231 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-04-17 06:02:33.895240 | orchestrator | Friday 17 April 2026 06:02:28 +0000 (0:00:01.170) 0:07:31.214 **********
2026-04-17 06:02:33.895248 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:02:33.895257 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:02:33.895265 | orchestrator |
2026-04-17 06:02:33.895273 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-04-17 06:02:33.895282 | orchestrator | Friday 17 April 2026 06:02:31 +0000 (0:00:02.678) 0:07:33.892 **********
2026-04-17 06:02:33.895290 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:02:33.895299 | orchestrator |
2026-04-17 06:02:33.895307 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 06:02:33.895316 | orchestrator | Friday 17 April 2026 06:02:32 +0000 (0:00:01.342) 0:07:35.235 **********
2026-04-17 06:02:33.895332 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-04-17 06:02:33.895349 | orchestrator |
2026-04-17 06:02:33.895358 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 06:02:33.895366 | orchestrator | Friday 17 April 2026 06:02:32 +0000 (0:00:00.424) 0:07:35.659 **********
2026-04-17 06:02:33.895375 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-04-17 06:02:33.895383 | orchestrator |
2026-04-17 06:02:33.895392 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 06:02:33.895400 | orchestrator | Friday 17 April 2026 06:02:33 +0000 (0:00:00.190) 0:07:35.850 **********
2026-04-17 06:02:33.895409 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:33.895418 | orchestrator |
2026-04-17 06:02:33.895426 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 06:02:33.895435 | orchestrator | Friday 17 April 2026 06:02:33 +0000 (0:00:00.518) 0:07:36.368 **********
2026-04-17 06:02:33.895443 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.895451 | orchestrator |
2026-04-17 06:02:33.895460 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 06:02:33.895469 | orchestrator | Friday 17 April 2026 06:02:33 +0000 (0:00:00.125) 0:07:36.494 **********
2026-04-17 06:02:33.895477 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:33.895485 | orchestrator |
2026-04-17 06:02:33.895494 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 06:02:33.895510 | orchestrator | Friday 17 April 2026 06:02:33 +0000 (0:00:00.136) 0:07:36.630 **********
2026-04-17 06:02:45.705024 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:45.705171 | orchestrator |
2026-04-17 06:02:45.705189 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 06:02:45.705202 | orchestrator | Friday 17 April 2026 06:02:33 +0000 (0:00:00.103) 0:07:36.734 **********
2026-04-17 06:02:45.705213 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:45.705225 | orchestrator |
2026-04-17 06:02:45.705236 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 06:02:45.705247 | orchestrator | Friday 17 April 2026 06:02:34 +0000 (0:00:00.484) 0:07:37.219 **********
2026-04-17 06:02:45.705259 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:45.705270 | orchestrator |
2026-04-17 06:02:45.705281 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 06:02:45.705292 | orchestrator | Friday 17 April 2026 06:02:34 +0000 (0:00:00.136) 0:07:37.356 **********
2026-04-17 06:02:45.705303 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:45.705313 | orchestrator |
2026-04-17 06:02:45.705324 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 06:02:45.705335 | orchestrator | Friday 17 April 2026 06:02:34 +0000 (0:00:00.109) 0:07:37.465 **********
2026-04-17 06:02:45.705346 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:45.705356 | orchestrator |
2026-04-17 06:02:45.705367 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 06:02:45.705378 | orchestrator | Friday 17 April 2026 06:02:35 +0000 (0:00:00.505) 0:07:37.971 **********
2026-04-17 06:02:45.705389 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:45.705399 | orchestrator |
2026-04-17 06:02:45.705410 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 06:02:45.705421 | orchestrator | Friday 17 April 2026 06:02:35 +0000 (0:00:00.381) 0:07:38.469 **********
2026-04-17 06:02:45.705431 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:45.705442 | orchestrator |
2026-04-17 06:02:45.705453 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 06:02:45.705464 | orchestrator | Friday 17 April 2026 06:02:36 +0000 (0:00:00.154) 0:07:38.850 **********
2026-04-17 06:02:45.705474 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:02:45.705485 | orchestrator |
2026-04-17 06:02:45.705496 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 06:02:45.705507 | orchestrator | Friday 17 April 2026 06:02:36 +0000 (0:00:00.132) 0:07:39.004 **********
2026-04-17 06:02:45.705542 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:45.705554 | orchestrator |
2026-04-17 06:02:45.705566 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 06:02:45.705580 | orchestrator | Friday 17 April 2026 06:02:36 +0000 (0:00:00.132) 0:07:39.136 **********
2026-04-17 06:02:45.705593 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:02:45.705606 | orchestrator |
2026-04-17 06:02:45.705619 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 06:02:45.705631 | orchestrator | Friday 17
April 2026 06:02:36 +0000 (0:00:00.114) 0:07:39.251 ********** 2026-04-17 06:02:45.705644 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.705664 | orchestrator | 2026-04-17 06:02:45.705683 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 06:02:45.705701 | orchestrator | Friday 17 April 2026 06:02:36 +0000 (0:00:00.132) 0:07:39.383 ********** 2026-04-17 06:02:45.705719 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.705736 | orchestrator | 2026-04-17 06:02:45.705753 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 06:02:45.705771 | orchestrator | Friday 17 April 2026 06:02:36 +0000 (0:00:00.101) 0:07:39.485 ********** 2026-04-17 06:02:45.705787 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.705807 | orchestrator | 2026-04-17 06:02:45.705828 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 06:02:45.705847 | orchestrator | Friday 17 April 2026 06:02:36 +0000 (0:00:00.118) 0:07:39.603 ********** 2026-04-17 06:02:45.705865 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:02:45.705886 | orchestrator | 2026-04-17 06:02:45.705905 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 06:02:45.705924 | orchestrator | Friday 17 April 2026 06:02:37 +0000 (0:00:00.149) 0:07:39.753 ********** 2026-04-17 06:02:45.705936 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:02:45.705946 | orchestrator | 2026-04-17 06:02:45.705957 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 06:02:45.705968 | orchestrator | Friday 17 April 2026 06:02:37 +0000 (0:00:00.154) 0:07:39.907 ********** 2026-04-17 06:02:45.705993 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:02:45.706005 | orchestrator | 2026-04-17 06:02:45.706098 | orchestrator | TASK 
[ceph-common : Include configure_repository.yml] ************************** 2026-04-17 06:02:45.706117 | orchestrator | Friday 17 April 2026 06:02:37 +0000 (0:00:00.213) 0:07:40.121 ********** 2026-04-17 06:02:45.706128 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706139 | orchestrator | 2026-04-17 06:02:45.706150 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-17 06:02:45.706161 | orchestrator | Friday 17 April 2026 06:02:37 +0000 (0:00:00.124) 0:07:40.245 ********** 2026-04-17 06:02:45.706171 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706182 | orchestrator | 2026-04-17 06:02:45.706193 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-17 06:02:45.706203 | orchestrator | Friday 17 April 2026 06:02:37 +0000 (0:00:00.122) 0:07:40.368 ********** 2026-04-17 06:02:45.706214 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706225 | orchestrator | 2026-04-17 06:02:45.706235 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-17 06:02:45.706246 | orchestrator | Friday 17 April 2026 06:02:38 +0000 (0:00:00.531) 0:07:40.900 ********** 2026-04-17 06:02:45.706257 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706267 | orchestrator | 2026-04-17 06:02:45.706278 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-17 06:02:45.706289 | orchestrator | Friday 17 April 2026 06:02:38 +0000 (0:00:00.128) 0:07:41.028 ********** 2026-04-17 06:02:45.706300 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706311 | orchestrator | 2026-04-17 06:02:45.706341 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-17 06:02:45.706353 | orchestrator | Friday 17 April 2026 06:02:38 +0000 (0:00:00.150) 0:07:41.178 ********** 2026-04-17 
06:02:45.706376 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706387 | orchestrator | 2026-04-17 06:02:45.706398 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-17 06:02:45.706408 | orchestrator | Friday 17 April 2026 06:02:38 +0000 (0:00:00.139) 0:07:41.318 ********** 2026-04-17 06:02:45.706419 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706430 | orchestrator | 2026-04-17 06:02:45.706441 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-17 06:02:45.706452 | orchestrator | Friday 17 April 2026 06:02:38 +0000 (0:00:00.169) 0:07:41.487 ********** 2026-04-17 06:02:45.706463 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706474 | orchestrator | 2026-04-17 06:02:45.706484 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-17 06:02:45.706495 | orchestrator | Friday 17 April 2026 06:02:38 +0000 (0:00:00.148) 0:07:41.636 ********** 2026-04-17 06:02:45.706505 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706516 | orchestrator | 2026-04-17 06:02:45.706527 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-17 06:02:45.706537 | orchestrator | Friday 17 April 2026 06:02:39 +0000 (0:00:00.142) 0:07:41.779 ********** 2026-04-17 06:02:45.706548 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706559 | orchestrator | 2026-04-17 06:02:45.706569 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-17 06:02:45.706580 | orchestrator | Friday 17 April 2026 06:02:39 +0000 (0:00:00.135) 0:07:41.915 ********** 2026-04-17 06:02:45.706591 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706602 | orchestrator | 2026-04-17 06:02:45.706612 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-17 06:02:45.706623 | orchestrator | Friday 17 April 2026 06:02:39 +0000 (0:00:00.166) 0:07:42.082 ********** 2026-04-17 06:02:45.706633 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706644 | orchestrator | 2026-04-17 06:02:45.706655 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-17 06:02:45.706666 | orchestrator | Friday 17 April 2026 06:02:39 +0000 (0:00:00.215) 0:07:42.297 ********** 2026-04-17 06:02:45.706676 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:02:45.706690 | orchestrator | 2026-04-17 06:02:45.706709 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-17 06:02:45.706728 | orchestrator | Friday 17 April 2026 06:02:40 +0000 (0:00:00.957) 0:07:43.255 ********** 2026-04-17 06:02:45.706746 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:02:45.706764 | orchestrator | 2026-04-17 06:02:45.706782 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-17 06:02:45.706799 | orchestrator | Friday 17 April 2026 06:02:41 +0000 (0:00:01.455) 0:07:44.711 ********** 2026-04-17 06:02:45.706817 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-04-17 06:02:45.706834 | orchestrator | 2026-04-17 06:02:45.706852 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-17 06:02:45.706869 | orchestrator | Friday 17 April 2026 06:02:42 +0000 (0:00:00.645) 0:07:45.356 ********** 2026-04-17 06:02:45.706887 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706905 | orchestrator | 2026-04-17 06:02:45.706924 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-17 06:02:45.706942 | orchestrator | Friday 17 April 2026 06:02:42 +0000 (0:00:00.142) 0:07:45.499 ********** 
2026-04-17 06:02:45.706959 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.706970 | orchestrator | 2026-04-17 06:02:45.706981 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-17 06:02:45.706992 | orchestrator | Friday 17 April 2026 06:02:42 +0000 (0:00:00.143) 0:07:45.642 ********** 2026-04-17 06:02:45.707002 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 06:02:45.707013 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 06:02:45.707033 | orchestrator | 2026-04-17 06:02:45.707044 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-17 06:02:45.707054 | orchestrator | Friday 17 April 2026 06:02:43 +0000 (0:00:00.865) 0:07:46.508 ********** 2026-04-17 06:02:45.707089 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:02:45.707101 | orchestrator | 2026-04-17 06:02:45.707119 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-17 06:02:45.707130 | orchestrator | Friday 17 April 2026 06:02:44 +0000 (0:00:00.489) 0:07:46.997 ********** 2026-04-17 06:02:45.707141 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.707152 | orchestrator | 2026-04-17 06:02:45.707163 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-17 06:02:45.707173 | orchestrator | Friday 17 April 2026 06:02:44 +0000 (0:00:00.156) 0:07:47.154 ********** 2026-04-17 06:02:45.707184 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:45.707195 | orchestrator | 2026-04-17 06:02:45.707206 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-17 06:02:45.707216 | orchestrator | Friday 17 April 2026 06:02:44 +0000 (0:00:00.140) 0:07:47.295 ********** 2026-04-17 06:02:45.707227 | orchestrator | 
skipping: [testbed-node-2] 2026-04-17 06:02:45.707238 | orchestrator | 2026-04-17 06:02:45.707248 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-17 06:02:45.707259 | orchestrator | Friday 17 April 2026 06:02:44 +0000 (0:00:00.129) 0:07:47.424 ********** 2026-04-17 06:02:45.707270 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-04-17 06:02:45.707281 | orchestrator | 2026-04-17 06:02:45.707291 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-17 06:02:45.707302 | orchestrator | Friday 17 April 2026 06:02:44 +0000 (0:00:00.224) 0:07:47.648 ********** 2026-04-17 06:02:45.707313 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:02:45.707323 | orchestrator | 2026-04-17 06:02:45.707334 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-17 06:02:45.707355 | orchestrator | Friday 17 April 2026 06:02:45 +0000 (0:00:00.792) 0:07:48.441 ********** 2026-04-17 06:02:59.628571 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-17 06:02:59.628684 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 06:02:59.628700 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 06:02:59.628712 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.628724 | orchestrator | 2026-04-17 06:02:59.628736 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-17 06:02:59.628747 | orchestrator | Friday 17 April 2026 06:02:45 +0000 (0:00:00.156) 0:07:48.597 ********** 2026-04-17 06:02:59.628758 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.628769 | orchestrator | 2026-04-17 06:02:59.628780 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-17 06:02:59.628791 | orchestrator | Friday 17 April 2026 06:02:45 +0000 (0:00:00.146) 0:07:48.743 ********** 2026-04-17 06:02:59.628802 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.628813 | orchestrator | 2026-04-17 06:02:59.628824 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-17 06:02:59.628835 | orchestrator | Friday 17 April 2026 06:02:46 +0000 (0:00:00.554) 0:07:49.298 ********** 2026-04-17 06:02:59.628846 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.628858 | orchestrator | 2026-04-17 06:02:59.628868 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-17 06:02:59.628879 | orchestrator | Friday 17 April 2026 06:02:46 +0000 (0:00:00.158) 0:07:49.456 ********** 2026-04-17 06:02:59.628890 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.628900 | orchestrator | 2026-04-17 06:02:59.628911 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-17 06:02:59.628943 | orchestrator | Friday 17 April 2026 06:02:46 +0000 (0:00:00.148) 0:07:49.605 ********** 2026-04-17 06:02:59.628954 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.628965 | orchestrator | 2026-04-17 06:02:59.628976 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-17 06:02:59.628986 | orchestrator | Friday 17 April 2026 06:02:47 +0000 (0:00:00.154) 0:07:49.760 ********** 2026-04-17 06:02:59.628997 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:02:59.629008 | orchestrator | 2026-04-17 06:02:59.629019 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-17 06:02:59.629029 | orchestrator | Friday 17 April 2026 06:02:48 +0000 (0:00:01.750) 0:07:51.510 ********** 2026-04-17 06:02:59.629040 | orchestrator | ok: 
[testbed-node-2] 2026-04-17 06:02:59.629097 | orchestrator | 2026-04-17 06:02:59.629111 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-17 06:02:59.629123 | orchestrator | Friday 17 April 2026 06:02:48 +0000 (0:00:00.172) 0:07:51.683 ********** 2026-04-17 06:02:59.629135 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-04-17 06:02:59.629148 | orchestrator | 2026-04-17 06:02:59.629160 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-17 06:02:59.629173 | orchestrator | Friday 17 April 2026 06:02:49 +0000 (0:00:00.242) 0:07:51.926 ********** 2026-04-17 06:02:59.629185 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.629198 | orchestrator | 2026-04-17 06:02:59.629211 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-17 06:02:59.629222 | orchestrator | Friday 17 April 2026 06:02:49 +0000 (0:00:00.162) 0:07:52.089 ********** 2026-04-17 06:02:59.629234 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.629246 | orchestrator | 2026-04-17 06:02:59.629259 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-17 06:02:59.629270 | orchestrator | Friday 17 April 2026 06:02:49 +0000 (0:00:00.150) 0:07:52.239 ********** 2026-04-17 06:02:59.629283 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.629295 | orchestrator | 2026-04-17 06:02:59.629307 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-17 06:02:59.629319 | orchestrator | Friday 17 April 2026 06:02:49 +0000 (0:00:00.168) 0:07:52.407 ********** 2026-04-17 06:02:59.629331 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.629343 | orchestrator | 2026-04-17 06:02:59.629355 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-17 06:02:59.629367 | orchestrator | Friday 17 April 2026 06:02:49 +0000 (0:00:00.144) 0:07:52.552 ********** 2026-04-17 06:02:59.629394 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.629406 | orchestrator | 2026-04-17 06:02:59.629419 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-17 06:02:59.629431 | orchestrator | Friday 17 April 2026 06:02:49 +0000 (0:00:00.164) 0:07:52.717 ********** 2026-04-17 06:02:59.629444 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.629457 | orchestrator | 2026-04-17 06:02:59.629469 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-17 06:02:59.629480 | orchestrator | Friday 17 April 2026 06:02:50 +0000 (0:00:00.535) 0:07:53.252 ********** 2026-04-17 06:02:59.629491 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.629501 | orchestrator | 2026-04-17 06:02:59.629512 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-17 06:02:59.629522 | orchestrator | Friday 17 April 2026 06:02:50 +0000 (0:00:00.167) 0:07:53.420 ********** 2026-04-17 06:02:59.629533 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.629544 | orchestrator | 2026-04-17 06:02:59.629555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-17 06:02:59.629565 | orchestrator | Friday 17 April 2026 06:02:50 +0000 (0:00:00.156) 0:07:53.576 ********** 2026-04-17 06:02:59.629576 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:02:59.629586 | orchestrator | 2026-04-17 06:02:59.629597 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-17 06:02:59.629616 | orchestrator | Friday 17 April 2026 06:02:51 +0000 (0:00:00.240) 0:07:53.817 ********** 2026-04-17 06:02:59.629626 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-04-17 06:02:59.629638 | orchestrator | 2026-04-17 06:02:59.629649 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-17 06:02:59.629677 | orchestrator | Friday 17 April 2026 06:02:51 +0000 (0:00:00.244) 0:07:54.061 ********** 2026-04-17 06:02:59.629688 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-04-17 06:02:59.629703 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-17 06:02:59.629722 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-17 06:02:59.629740 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-17 06:02:59.629758 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-17 06:02:59.629776 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-17 06:02:59.629793 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-17 06:02:59.629810 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-17 06:02:59.629826 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-17 06:02:59.629843 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-17 06:02:59.629861 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-17 06:02:59.629878 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-17 06:02:59.629897 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-17 06:02:59.629914 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-17 06:02:59.629933 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-04-17 06:02:59.629951 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-04-17 06:02:59.629969 | orchestrator | 2026-04-17 06:02:59.629981 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-17 06:02:59.629992 | orchestrator | Friday 17 April 2026 06:02:57 +0000 (0:00:05.696) 0:07:59.757 ********** 2026-04-17 06:02:59.630002 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630013 | orchestrator | 2026-04-17 06:02:59.630155 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-17 06:02:59.630190 | orchestrator | Friday 17 April 2026 06:02:57 +0000 (0:00:00.137) 0:07:59.895 ********** 2026-04-17 06:02:59.630201 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630212 | orchestrator | 2026-04-17 06:02:59.630223 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-17 06:02:59.630233 | orchestrator | Friday 17 April 2026 06:02:57 +0000 (0:00:00.137) 0:08:00.032 ********** 2026-04-17 06:02:59.630244 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630255 | orchestrator | 2026-04-17 06:02:59.630265 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-17 06:02:59.630276 | orchestrator | Friday 17 April 2026 06:02:57 +0000 (0:00:00.167) 0:08:00.199 ********** 2026-04-17 06:02:59.630287 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630297 | orchestrator | 2026-04-17 06:02:59.630308 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-17 06:02:59.630318 | orchestrator | Friday 17 April 2026 06:02:57 +0000 (0:00:00.149) 0:08:00.349 ********** 2026-04-17 06:02:59.630329 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630340 | orchestrator | 2026-04-17 06:02:59.630350 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-17 06:02:59.630361 | orchestrator | Friday 17 April 2026 06:02:57 +0000 (0:00:00.143) 0:08:00.492 ********** 2026-04-17 
06:02:59.630371 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630382 | orchestrator | 2026-04-17 06:02:59.630392 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-17 06:02:59.630414 | orchestrator | Friday 17 April 2026 06:02:58 +0000 (0:00:00.603) 0:08:01.095 ********** 2026-04-17 06:02:59.630425 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630435 | orchestrator | 2026-04-17 06:02:59.630446 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-17 06:02:59.630456 | orchestrator | Friday 17 April 2026 06:02:58 +0000 (0:00:00.160) 0:08:01.256 ********** 2026-04-17 06:02:59.630467 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630477 | orchestrator | 2026-04-17 06:02:59.630488 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-17 06:02:59.630506 | orchestrator | Friday 17 April 2026 06:02:58 +0000 (0:00:00.140) 0:08:01.397 ********** 2026-04-17 06:02:59.630517 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630528 | orchestrator | 2026-04-17 06:02:59.630538 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-17 06:02:59.630549 | orchestrator | Friday 17 April 2026 06:02:58 +0000 (0:00:00.153) 0:08:01.550 ********** 2026-04-17 06:02:59.630559 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630570 | orchestrator | 2026-04-17 06:02:59.630580 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-17 06:02:59.630591 | orchestrator | Friday 17 April 2026 06:02:58 +0000 (0:00:00.145) 0:08:01.696 ********** 2026-04-17 06:02:59.630602 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630612 | orchestrator | 2026-04-17 
06:02:59.630623 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-17 06:02:59.630634 | orchestrator | Friday 17 April 2026 06:02:59 +0000 (0:00:00.145) 0:08:01.842 ********** 2026-04-17 06:02:59.630644 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630655 | orchestrator | 2026-04-17 06:02:59.630665 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-17 06:02:59.630676 | orchestrator | Friday 17 April 2026 06:02:59 +0000 (0:00:00.142) 0:08:01.985 ********** 2026-04-17 06:02:59.630687 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630697 | orchestrator | 2026-04-17 06:02:59.630708 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-17 06:02:59.630719 | orchestrator | Friday 17 April 2026 06:02:59 +0000 (0:00:00.235) 0:08:02.220 ********** 2026-04-17 06:02:59.630729 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:02:59.630740 | orchestrator | 2026-04-17 06:02:59.630751 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-17 06:02:59.630772 | orchestrator | Friday 17 April 2026 06:02:59 +0000 (0:00:00.143) 0:08:02.364 ********** 2026-04-17 06:03:18.890439 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:03:18.890547 | orchestrator | 2026-04-17 06:03:18.890561 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-17 06:03:18.890571 | orchestrator | Friday 17 April 2026 06:02:59 +0000 (0:00:00.257) 0:08:02.621 ********** 2026-04-17 06:03:18.890580 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:03:18.890588 | orchestrator | 2026-04-17 06:03:18.890598 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-17 06:03:18.890606 | orchestrator | Friday 17 April 2026 06:03:00 +0000 (0:00:00.148) 
0:08:02.770 **********
2026-04-17 06:03:18.890615 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.890624 | orchestrator |
2026-04-17 06:03:18.890633 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:03:18.890644 | orchestrator | Friday 17 April 2026 06:03:00 +0000 (0:00:00.161) 0:08:02.931 **********
2026-04-17 06:03:18.890652 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.890661 | orchestrator |
2026-04-17 06:03:18.890669 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:03:18.890678 | orchestrator | Friday 17 April 2026 06:03:00 +0000 (0:00:00.162) 0:08:03.094 **********
2026-04-17 06:03:18.890687 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.890716 | orchestrator |
2026-04-17 06:03:18.890726 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:03:18.890734 | orchestrator | Friday 17 April 2026 06:03:00 +0000 (0:00:00.570) 0:08:03.664 **********
2026-04-17 06:03:18.890743 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.890751 | orchestrator |
2026-04-17 06:03:18.890760 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:03:18.890769 | orchestrator | Friday 17 April 2026 06:03:01 +0000 (0:00:00.147) 0:08:03.812 **********
2026-04-17 06:03:18.890777 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.890786 | orchestrator |
2026-04-17 06:03:18.890794 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:03:18.890803 | orchestrator | Friday 17 April 2026 06:03:01 +0000 (0:00:00.148) 0:08:03.960 **********
2026-04-17 06:03:18.890811 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-17 06:03:18.890821 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-17 06:03:18.890830 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-17 06:03:18.890839 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.890848 | orchestrator |
2026-04-17 06:03:18.890856 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 06:03:18.890865 | orchestrator | Friday 17 April 2026 06:03:01 +0000 (0:00:00.443) 0:08:04.404 **********
2026-04-17 06:03:18.890873 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-17 06:03:18.890882 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-17 06:03:18.890890 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-17 06:03:18.890899 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.890908 | orchestrator |
2026-04-17 06:03:18.890916 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 06:03:18.890925 | orchestrator | Friday 17 April 2026 06:03:02 +0000 (0:00:00.433) 0:08:04.837 **********
2026-04-17 06:03:18.890933 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-17 06:03:18.890942 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-17 06:03:18.890950 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-17 06:03:18.890959 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.890967 | orchestrator |
2026-04-17 06:03:18.890978 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 06:03:18.890988 | orchestrator | Friday 17 April 2026 06:03:02 +0000 (0:00:00.464) 0:08:05.302 **********
2026-04-17 06:03:18.890998 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.891008 | orchestrator |
2026-04-17 06:03:18.891018 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 06:03:18.891066 | orchestrator | Friday 17 April 2026 06:03:02 +0000 (0:00:00.149) 0:08:05.451 **********
2026-04-17 06:03:18.891077 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-17 06:03:18.891088 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.891098 | orchestrator |
2026-04-17 06:03:18.891108 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-17 06:03:18.891118 | orchestrator | Friday 17 April 2026 06:03:03 +0000 (0:00:00.891) 0:08:05.812 **********
2026-04-17 06:03:18.891128 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:03:18.891138 | orchestrator |
2026-04-17 06:03:18.891148 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-17 06:03:18.891158 | orchestrator | Friday 17 April 2026 06:03:03 +0000 (0:00:00.160) 0:08:06.704 **********
2026-04-17 06:03:18.891168 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:18.891178 | orchestrator |
2026-04-17 06:03:18.891188 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-17 06:03:18.891198 | orchestrator | Friday 17 April 2026 06:03:04 +0000 (0:00:00.264) 0:08:06.864 **********
2026-04-17 06:03:18.891208 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2
2026-04-17 06:03:18.891225 | orchestrator |
2026-04-17 06:03:18.891235 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-17 06:03:18.891245 | orchestrator | Friday 17 April 2026 06:03:04 +0000 (0:00:00.264) 0:08:07.129 **********
2026-04-17 06:03:18.891255 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:18.891265 | orchestrator |
2026-04-17 06:03:18.891274 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-17 06:03:18.891284 | orchestrator | Friday 17 April 2026 06:03:07 +0000 (0:00:03.217) 0:08:10.346 **********
2026-04-17 06:03:18.891295 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.891305 | orchestrator |
2026-04-17 06:03:18.891316 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-17 06:03:18.891340 | orchestrator | Friday 17 April 2026 06:03:07 +0000 (0:00:00.190) 0:08:10.537 **********
2026-04-17 06:03:18.891349 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:18.891358 | orchestrator |
2026-04-17 06:03:18.891367 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-17 06:03:18.891376 | orchestrator | Friday 17 April 2026 06:03:08 +0000 (0:00:00.244) 0:08:10.781 **********
2026-04-17 06:03:18.891384 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:18.891393 | orchestrator |
2026-04-17 06:03:18.891401 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-17 06:03:18.891410 | orchestrator | Friday 17 April 2026 06:03:08 +0000 (0:00:00.188) 0:08:10.970 **********
2026-04-17 06:03:18.891418 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:03:18.891427 | orchestrator |
2026-04-17 06:03:18.891435 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-17 06:03:18.891444 | orchestrator | Friday 17 April 2026 06:03:09 +0000 (0:00:01.080) 0:08:12.050 **********
2026-04-17 06:03:18.891453 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:18.891461 | orchestrator |
2026-04-17 06:03:18.891470 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-17 06:03:18.891478 | orchestrator | Friday 17 April 2026 06:03:09 +0000 (0:00:00.584) 0:08:12.635 **********
2026-04-17 06:03:18.891488 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:18.891503 | orchestrator |
2026-04-17 06:03:18.891518 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-17 06:03:18.891527 | orchestrator | Friday 17 April 2026 06:03:10 +0000 (0:00:00.531) 0:08:13.167 **********
2026-04-17 06:03:18.891536 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:18.891544 | orchestrator |
2026-04-17 06:03:18.891553 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-17 06:03:18.891561 | orchestrator | Friday 17 April 2026 06:03:10 +0000 (0:00:00.510) 0:08:13.677 **********
2026-04-17 06:03:18.891570 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:03:18.891578 | orchestrator |
2026-04-17 06:03:18.891587 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-17 06:03:18.891595 | orchestrator | Friday 17 April 2026 06:03:11 +0000 (0:00:00.558) 0:08:14.236 **********
2026-04-17 06:03:18.891603 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:03:18.891612 | orchestrator |
2026-04-17 06:03:18.891620 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-17 06:03:18.891629 | orchestrator | Friday 17 April 2026 06:03:12 +0000 (0:00:00.607) 0:08:14.843 **********
2026-04-17 06:03:18.891637 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:03:18.891645 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-17 06:03:18.891654 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-17 06:03:18.891663 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-17 06:03:18.891671 | orchestrator |
2026-04-17 06:03:18.891679 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-17 06:03:18.891688 | orchestrator | Friday 17 April 2026 06:03:14 +0000 (0:00:02.841) 0:08:17.684 **********
2026-04-17 06:03:18.891702 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:03:18.891711 | orchestrator |
2026-04-17 06:03:18.891719 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-17 06:03:18.891728 | orchestrator | Friday 17 April 2026 06:03:15 +0000 (0:00:01.017) 0:08:18.702 **********
2026-04-17 06:03:18.891736 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:18.891745 | orchestrator |
2026-04-17 06:03:18.891753 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-17 06:03:18.891762 | orchestrator | Friday 17 April 2026 06:03:16 +0000 (0:00:00.152) 0:08:18.855 **********
2026-04-17 06:03:18.891770 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:18.891779 | orchestrator |
2026-04-17 06:03:18.891787 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-17 06:03:18.891796 | orchestrator | Friday 17 April 2026 06:03:16 +0000 (0:00:00.562) 0:08:19.417 **********
2026-04-17 06:03:18.891804 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:18.891813 | orchestrator |
2026-04-17 06:03:18.891821 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-17 06:03:18.891835 | orchestrator | Friday 17 April 2026 06:03:17 +0000 (0:00:00.776) 0:08:20.194 **********
2026-04-17 06:03:18.891844 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:18.891852 | orchestrator |
2026-04-17 06:03:18.891861 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-17 06:03:18.891869 | orchestrator | Friday 17 April 2026 06:03:17 +0000 (0:00:00.470) 0:08:20.664 **********
2026-04-17 06:03:18.891878 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.891887 | orchestrator |
2026-04-17 06:03:18.891895 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-17 06:03:18.891904 | orchestrator | Friday 17 April 2026 06:03:18 +0000 (0:00:00.255) 0:08:20.840 **********
2026-04-17 06:03:18.891912 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2
2026-04-17 06:03:18.891926 | orchestrator |
2026-04-17 06:03:18.891940 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-17 06:03:18.891954 | orchestrator | Friday 17 April 2026 06:03:18 +0000 (0:00:00.156) 0:08:21.096 **********
2026-04-17 06:03:18.891968 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.891982 | orchestrator |
2026-04-17 06:03:18.891996 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-17 06:03:18.892009 | orchestrator | Friday 17 April 2026 06:03:18 +0000 (0:00:00.156) 0:08:21.253 **********
2026-04-17 06:03:18.892022 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:18.892061 | orchestrator |
2026-04-17 06:03:18.892074 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-17 06:03:18.892087 | orchestrator | Friday 17 April 2026 06:03:18 +0000 (0:00:00.142) 0:08:21.396 **********
2026-04-17 06:03:18.892101 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2
2026-04-17 06:03:18.892114 | orchestrator |
2026-04-17 06:03:18.892128 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-17 06:03:18.892151 | orchestrator | Friday 17 April 2026 06:03:18 +0000 (0:00:00.234) 0:08:21.630 **********
2026-04-17 06:03:47.756197 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:03:47.756317 | orchestrator |
2026-04-17 06:03:47.756333 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-17 06:03:47.756346 | orchestrator | Friday 17 April 2026 06:03:20 +0000 (0:00:01.427) 0:08:23.058 **********
2026-04-17 06:03:47.756358 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:47.756369 | orchestrator |
2026-04-17 06:03:47.756380 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-17 06:03:47.756391 | orchestrator | Friday 17 April 2026 06:03:21 +0000 (0:00:00.961) 0:08:24.019 **********
2026-04-17 06:03:47.756403 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:47.756414 | orchestrator |
2026-04-17 06:03:47.756425 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-17 06:03:47.756459 | orchestrator | Friday 17 April 2026 06:03:22 +0000 (0:00:01.461) 0:08:25.480 **********
2026-04-17 06:03:47.756471 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:03:47.756482 | orchestrator |
2026-04-17 06:03:47.756492 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-17 06:03:47.756503 | orchestrator | Friday 17 April 2026 06:03:25 +0000 (0:00:02.586) 0:08:28.067 **********
2026-04-17 06:03:47.756513 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2
2026-04-17 06:03:47.756525 | orchestrator |
2026-04-17 06:03:47.756536 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-17 06:03:47.756546 | orchestrator | Friday 17 April 2026 06:03:25 +0000 (0:00:00.245) 0:08:28.312 **********
2026-04-17 06:03:47.756557 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:47.756567 | orchestrator |
2026-04-17 06:03:47.756578 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-17 06:03:47.756589 | orchestrator | Friday 17 April 2026 06:03:26 +0000 (0:00:01.252) 0:08:29.564 **********
2026-04-17 06:03:47.756599 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:47.756610 | orchestrator |
2026-04-17 06:03:47.756620 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-17 06:03:47.756631 | orchestrator | Friday 17 April 2026 06:03:28 +0000 (0:00:02.081) 0:08:31.646 **********
2026-04-17 06:03:47.756642 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:47.756653 | orchestrator |
2026-04-17 06:03:47.756663 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-17 06:03:47.756674 | orchestrator | Friday 17 April 2026 06:03:29 +0000 (0:00:00.130) 0:08:31.777 **********
2026-04-17 06:03:47.756687 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-17 06:03:47.756701 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-17 06:03:47.756727 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-17 06:03:47.756742 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-17 06:03:47.756756 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-17 06:03:47.756771 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__feaa8940ae4ec9ad8f14d6912853fa6029ac6abf'}])
2026-04-17 06:03:47.756793 | orchestrator |
2026-04-17 06:03:47.756807 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-04-17 06:03:47.756838 | orchestrator | Friday 17 April 2026 06:03:38 +0000 (0:00:09.201) 0:08:40.979 **********
2026-04-17 06:03:47.756852 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:03:47.756864 | orchestrator |
2026-04-17 06:03:47.756875 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 06:03:47.756886 | orchestrator | Friday 17 April 2026 06:03:39 +0000 (0:00:01.537) 0:08:42.516 **********
2026-04-17 06:03:47.756897 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:03:47.756908 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-04-17 06:03:47.756918 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-04-17 06:03:47.756929 | orchestrator |
2026-04-17 06:03:47.756939 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 06:03:47.756950 | orchestrator | Friday 17 April 2026 06:03:41 +0000 (0:00:01.353) 0:08:43.869 **********
2026-04-17 06:03:47.756960 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-17 06:03:47.756972 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-17 06:03:47.756982 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:03:47.756993 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:47.757032 | orchestrator |
2026-04-17 06:03:47.757052 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-04-17 06:03:47.757071 | orchestrator | Friday 17 April 2026 06:03:41 +0000 (0:00:00.489) 0:08:44.358 **********
2026-04-17 06:03:47.757089 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:47.757107 | orchestrator |
2026-04-17 06:03:47.757119 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-04-17 06:03:47.757130 | orchestrator | Friday 17 April 2026 06:03:41 +0000 (0:00:00.151) 0:08:44.510 **********
2026-04-17 06:03:47.757140 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:47.757151 | orchestrator |
2026-04-17 06:03:47.757162 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-17 06:03:47.757172 | orchestrator | Friday 17 April 2026 06:03:44 +0000 (0:00:02.340) 0:08:46.851 **********
2026-04-17 06:03:47.757183 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:47.757193 | orchestrator |
2026-04-17 06:03:47.757204 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-17 06:03:47.757214 | orchestrator | Friday 17 April 2026 06:03:44 +0000 (0:00:00.154) 0:08:47.005 **********
2026-04-17 06:03:47.757225 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:47.757236 | orchestrator |
2026-04-17 06:03:47.757246 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-17 06:03:47.757257 | orchestrator | Friday 17 April 2026 06:03:44 +0000 (0:00:00.134) 0:08:47.140 **********
2026-04-17 06:03:47.757267 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:47.757278 | orchestrator |
2026-04-17 06:03:47.757289 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-17 06:03:47.757299 | orchestrator | Friday 17 April 2026 06:03:44 +0000 (0:00:00.138) 0:08:47.278 **********
2026-04-17 06:03:47.757310 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:47.757321 | orchestrator |
2026-04-17 06:03:47.757331 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-17 06:03:47.757342 | orchestrator | Friday 17 April 2026 06:03:44 +0000 (0:00:00.141) 0:08:47.420 **********
2026-04-17 06:03:47.757352 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:47.757363 | orchestrator |
2026-04-17 06:03:47.757373 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-17 06:03:47.757384 | orchestrator | Friday 17 April 2026 06:03:44 +0000 (0:00:00.138) 0:08:47.558 **********
2026-04-17 06:03:47.757404 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:47.757415 | orchestrator |
2026-04-17 06:03:47.757425 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-17 06:03:47.757436 | orchestrator | Friday 17 April 2026 06:03:44 +0000 (0:00:00.142) 0:08:47.701 **********
2026-04-17 06:03:47.757446 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:03:47.757457 | orchestrator |
2026-04-17 06:03:47.757467 | orchestrator | PLAY [Reset mon_host] **********************************************************
2026-04-17 06:03:47.757484 | orchestrator |
2026-04-17 06:03:47.757495 | orchestrator | TASK [Reset mon_host fact] *****************************************************
2026-04-17 06:03:47.757506 | orchestrator | Friday 17 April 2026 06:03:45 +0000 (0:00:00.775) 0:08:48.476 **********
2026-04-17 06:03:47.757516 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:03:47.757527 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:03:47.757538 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:03:47.757548 | orchestrator |
2026-04-17 06:03:47.757559 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-04-17 06:03:47.757569 | orchestrator |
2026-04-17 06:03:47.757580 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-17 06:03:47.757590 | orchestrator | Friday 17 April 2026 06:03:46 +0000 (0:00:01.191) 0:08:49.668 **********
2026-04-17 06:03:47.757601 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:47.757611 | orchestrator |
2026-04-17 06:03:47.757622 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-17 06:03:47.757632 | orchestrator | Friday 17 April 2026 06:03:47 +0000 (0:00:00.255) 0:08:49.924 **********
2026-04-17 06:03:47.757643 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:47.757653 | orchestrator |
2026-04-17 06:03:47.757664 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 06:03:47.757674 | orchestrator | Friday 17 April 2026 06:03:47 +0000 (0:00:00.213) 0:08:50.137 **********
2026-04-17 06:03:47.757685 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:47.757695 | orchestrator |
2026-04-17 06:03:47.757706 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 06:03:47.757717 | orchestrator | Friday 17 April 2026 06:03:47 +0000 (0:00:00.135) 0:08:50.273 **********
2026-04-17 06:03:47.757727 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:47.757738 | orchestrator |
2026-04-17 06:03:47.757748 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 06:03:47.757759 | orchestrator | Friday 17 April 2026 06:03:47 +0000 (0:00:00.152) 0:08:50.426 **********
2026-04-17 06:03:47.757777 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.326601 | orchestrator |
2026-04-17 06:03:55.326711 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 06:03:55.326730 | orchestrator | Friday 17 April 2026 06:03:47 +0000 (0:00:00.161) 0:08:50.588 **********
2026-04-17 06:03:55.326743 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.326755 | orchestrator |
2026-04-17 06:03:55.326767 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 06:03:55.326785 | orchestrator | Friday 17 April 2026 06:03:47 +0000 (0:00:00.140) 0:08:50.729 **********
2026-04-17 06:03:55.326804 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.326822 | orchestrator |
2026-04-17 06:03:55.326841 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 06:03:55.326859 | orchestrator | Friday 17 April 2026 06:03:48 +0000 (0:00:00.150) 0:08:50.880 **********
2026-04-17 06:03:55.326876 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.326895 | orchestrator |
2026-04-17 06:03:55.326913 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 06:03:55.326931 | orchestrator | Friday 17 April 2026 06:03:48 +0000 (0:00:00.138) 0:08:51.018 **********
2026-04-17 06:03:55.326950 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.326968 | orchestrator |
2026-04-17 06:03:55.327047 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 06:03:55.327083 | orchestrator | Friday 17 April 2026 06:03:48 +0000 (0:00:00.145) 0:08:51.164 **********
2026-04-17 06:03:55.327095 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327105 | orchestrator |
2026-04-17 06:03:55.327116 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 06:03:55.327127 | orchestrator | Friday 17 April 2026 06:03:48 +0000 (0:00:00.147) 0:08:51.312 **********
2026-04-17 06:03:55.327140 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327154 | orchestrator |
2026-04-17 06:03:55.327167 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 06:03:55.327180 | orchestrator | Friday 17 April 2026 06:03:49 +0000 (0:00:00.529) 0:08:51.841 **********
2026-04-17 06:03:55.327192 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327204 | orchestrator |
2026-04-17 06:03:55.327217 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-17 06:03:55.327229 | orchestrator | Friday 17 April 2026 06:03:49 +0000 (0:00:00.226) 0:08:52.068 **********
2026-04-17 06:03:55.327242 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327254 | orchestrator |
2026-04-17 06:03:55.327267 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-17 06:03:55.327280 | orchestrator | Friday 17 April 2026 06:03:49 +0000 (0:00:00.145) 0:08:52.214 **********
2026-04-17 06:03:55.327293 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327305 | orchestrator |
2026-04-17 06:03:55.327318 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-17 06:03:55.327331 | orchestrator | Friday 17 April 2026 06:03:49 +0000 (0:00:00.134) 0:08:52.348 **********
2026-04-17 06:03:55.327343 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327356 | orchestrator |
2026-04-17 06:03:55.327369 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-17 06:03:55.327381 | orchestrator | Friday 17 April 2026 06:03:49 +0000 (0:00:00.192) 0:08:52.541 **********
2026-04-17 06:03:55.327394 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327406 | orchestrator |
2026-04-17 06:03:55.327418 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-17 06:03:55.327431 | orchestrator | Friday 17 April 2026 06:03:49 +0000 (0:00:00.146) 0:08:52.687 **********
2026-04-17 06:03:55.327443 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327455 | orchestrator |
2026-04-17 06:03:55.327469 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-17 06:03:55.327482 | orchestrator | Friday 17 April 2026 06:03:50 +0000 (0:00:00.154) 0:08:52.841 **********
2026-04-17 06:03:55.327493 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327504 | orchestrator |
2026-04-17 06:03:55.327515 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-17 06:03:55.327541 | orchestrator | Friday 17 April 2026 06:03:50 +0000 (0:00:00.140) 0:08:52.982 **********
2026-04-17 06:03:55.327553 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327564 | orchestrator |
2026-04-17 06:03:55.327575 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-17 06:03:55.327587 | orchestrator | Friday 17 April 2026 06:03:50 +0000 (0:00:00.148) 0:08:53.130 **********
2026-04-17 06:03:55.327598 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327608 | orchestrator |
2026-04-17 06:03:55.327619 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-17 06:03:55.327630 | orchestrator | Friday 17 April 2026 06:03:50 +0000 (0:00:00.141) 0:08:53.271 **********
2026-04-17 06:03:55.327641 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327652 | orchestrator |
2026-04-17 06:03:55.327662 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-17 06:03:55.327673 | orchestrator | Friday 17 April 2026 06:03:50 +0000 (0:00:00.141) 0:08:53.413 **********
2026-04-17 06:03:55.327684 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327694 | orchestrator |
2026-04-17 06:03:55.327705 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-17 06:03:55.327723 | orchestrator | Friday 17 April 2026 06:03:50 +0000 (0:00:00.142) 0:08:53.556 **********
2026-04-17 06:03:55.327733 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327744 | orchestrator |
2026-04-17 06:03:55.327755 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-17 06:03:55.327765 | orchestrator | Friday 17 April 2026 06:03:51 +0000 (0:00:00.547) 0:08:54.103 **********
2026-04-17 06:03:55.327776 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327787 | orchestrator |
2026-04-17 06:03:55.327797 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-17 06:03:55.327808 | orchestrator | Friday 17 April 2026 06:03:51 +0000 (0:00:00.273) 0:08:54.377 **********
2026-04-17 06:03:55.327819 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327829 | orchestrator |
2026-04-17 06:03:55.327859 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-17 06:03:55.327871 | orchestrator | Friday 17 April 2026 06:03:51 +0000 (0:00:00.149) 0:08:54.527 **********
2026-04-17 06:03:55.327882 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327893 | orchestrator |
2026-04-17 06:03:55.327904 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-17 06:03:55.327914 | orchestrator | Friday 17 April 2026 06:03:51 +0000 (0:00:00.149) 0:08:54.676 **********
2026-04-17 06:03:55.327925 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.327936 | orchestrator |
2026-04-17 06:03:55.327953 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-17 06:03:55.327971 | orchestrator | Friday 17 April 2026 06:03:52 +0000 (0:00:00.188) 0:08:54.864 **********
2026-04-17 06:03:55.327990 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328030 | orchestrator |
2026-04-17 06:03:55.328049 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-17 06:03:55.328066 | orchestrator | Friday 17 April 2026 06:03:52 +0000 (0:00:00.150) 0:08:55.014 **********
2026-04-17 06:03:55.328083 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328101 | orchestrator |
2026-04-17 06:03:55.328118 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-17 06:03:55.328137 | orchestrator | Friday 17 April 2026 06:03:52 +0000 (0:00:00.159) 0:08:55.174 **********
2026-04-17 06:03:55.328155 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328173 | orchestrator |
2026-04-17 06:03:55.328191 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-17 06:03:55.328209 | orchestrator | Friday 17 April 2026 06:03:52 +0000 (0:00:00.140) 0:08:55.314 **********
2026-04-17 06:03:55.328224 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328243 | orchestrator |
2026-04-17 06:03:55.328261 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-17 06:03:55.328279 | orchestrator | Friday 17 April 2026 06:03:52 +0000 (0:00:00.143) 0:08:55.458 **********
2026-04-17 06:03:55.328298 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328316 | orchestrator |
2026-04-17 06:03:55.328333 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-17 06:03:55.328352 | orchestrator | Friday 17 April 2026 06:03:52 +0000 (0:00:00.228) 0:08:55.686 **********
2026-04-17 06:03:55.328371 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328389 | orchestrator |
2026-04-17 06:03:55.328408 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-17 06:03:55.328419 | orchestrator | Friday 17 April 2026 06:03:53 +0000 (0:00:00.155) 0:08:55.842 **********
2026-04-17 06:03:55.328430 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328441 | orchestrator |
2026-04-17 06:03:55.328451 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-17 06:03:55.328462 | orchestrator | Friday 17 April 2026 06:03:53 +0000 (0:00:00.516) 0:08:56.358 **********
2026-04-17 06:03:55.328472 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328483 | orchestrator |
2026-04-17 06:03:55.328493 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-17 06:03:55.328515 | orchestrator | Friday 17 April 2026 06:03:53 +0000 (0:00:00.145) 0:08:56.504 **********
2026-04-17 06:03:55.328525 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328542 | orchestrator |
2026-04-17 06:03:55.328560 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-17 06:03:55.328577 | orchestrator | Friday 17 April 2026 06:03:53 +0000 (0:00:00.148) 0:08:56.652 **********
2026-04-17 06:03:55.328594 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328612 | orchestrator |
2026-04-17 06:03:55.328629 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-17 06:03:55.328646 | orchestrator | Friday 17 April 2026 06:03:54 +0000 (0:00:00.187) 0:08:56.840 **********
2026-04-17 06:03:55.328664 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328684 | orchestrator |
2026-04-17 06:03:55.328701 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-17 06:03:55.328718 | orchestrator | Friday 17 April 2026 06:03:54 +0000 (0:00:00.145) 0:08:56.985 **********
2026-04-17 06:03:55.328738 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328749 | orchestrator |
2026-04-17 06:03:55.328760 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-17 06:03:55.328772 | orchestrator | Friday 17 April 2026 06:03:54 +0000 (0:00:00.163) 0:08:57.148 **********
2026-04-17 06:03:55.328783 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:03:55.328793 | orchestrator |
2026-04-17 06:03:55.328804 | orchestrator | TASK [ceph-config : Set_fact num_osds from the
output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-17 06:03:55.328815 | orchestrator | Friday 17 April 2026 06:03:54 +0000 (0:00:00.168) 0:08:57.316 ********** 2026-04-17 06:03:55.328825 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:03:55.328836 | orchestrator | 2026-04-17 06:03:55.328846 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-17 06:03:55.328857 | orchestrator | Friday 17 April 2026 06:03:54 +0000 (0:00:00.132) 0:08:57.449 ********** 2026-04-17 06:03:55.328868 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:03:55.328878 | orchestrator | 2026-04-17 06:03:55.328889 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-17 06:03:55.328899 | orchestrator | Friday 17 April 2026 06:03:54 +0000 (0:00:00.149) 0:08:57.599 ********** 2026-04-17 06:03:55.328910 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:03:55.328920 | orchestrator | 2026-04-17 06:03:55.328931 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-17 06:03:55.328941 | orchestrator | Friday 17 April 2026 06:03:54 +0000 (0:00:00.144) 0:08:57.743 ********** 2026-04-17 06:03:55.328952 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:03:55.328963 | orchestrator | 2026-04-17 06:03:55.328973 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-17 06:03:55.328984 | orchestrator | Friday 17 April 2026 06:03:55 +0000 (0:00:00.158) 0:08:57.902 ********** 2026-04-17 06:03:55.329033 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:03:55.329046 | orchestrator | 2026-04-17 06:03:55.329069 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-17 06:04:05.301370 | orchestrator | Friday 17 April 2026 06:03:55 +0000 (0:00:00.163) 0:08:58.066 
********** 2026-04-17 06:04:05.301479 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.301495 | orchestrator | 2026-04-17 06:04:05.301509 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-17 06:04:05.301520 | orchestrator | Friday 17 April 2026 06:03:55 +0000 (0:00:00.249) 0:08:58.315 ********** 2026-04-17 06:04:05.301531 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.301542 | orchestrator | 2026-04-17 06:04:05.301554 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-17 06:04:05.301565 | orchestrator | Friday 17 April 2026 06:03:55 +0000 (0:00:00.140) 0:08:58.456 ********** 2026-04-17 06:04:05.301576 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.301611 | orchestrator | 2026-04-17 06:04:05.301623 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-17 06:04:05.301634 | orchestrator | Friday 17 April 2026 06:03:56 +0000 (0:00:01.083) 0:08:59.540 ********** 2026-04-17 06:04:05.301644 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.301655 | orchestrator | 2026-04-17 06:04:05.301666 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-17 06:04:05.301677 | orchestrator | Friday 17 April 2026 06:03:56 +0000 (0:00:00.143) 0:08:59.683 ********** 2026-04-17 06:04:05.301687 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.301698 | orchestrator | 2026-04-17 06:04:05.301709 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 06:04:05.301721 | orchestrator | Friday 17 April 2026 06:03:57 +0000 (0:00:00.161) 0:08:59.845 ********** 2026-04-17 06:04:05.301732 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.301743 | orchestrator | 2026-04-17 06:04:05.301754 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-17 06:04:05.301765 | orchestrator | Friday 17 April 2026 06:03:57 +0000 (0:00:00.157) 0:09:00.002 ********** 2026-04-17 06:04:05.301775 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.301786 | orchestrator | 2026-04-17 06:04:05.301797 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 06:04:05.301807 | orchestrator | Friday 17 April 2026 06:03:57 +0000 (0:00:00.140) 0:09:00.142 ********** 2026-04-17 06:04:05.301818 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.301829 | orchestrator | 2026-04-17 06:04:05.301840 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 06:04:05.301850 | orchestrator | Friday 17 April 2026 06:03:57 +0000 (0:00:00.142) 0:09:00.284 ********** 2026-04-17 06:04:05.301861 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.301872 | orchestrator | 2026-04-17 06:04:05.301882 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 06:04:05.301893 | orchestrator | Friday 17 April 2026 06:03:57 +0000 (0:00:00.178) 0:09:00.463 ********** 2026-04-17 06:04:05.301905 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-17 06:04:05.301919 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-17 06:04:05.301932 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-17 06:04:05.301944 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.301956 | orchestrator | 2026-04-17 06:04:05.301970 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:04:05.301982 | orchestrator | Friday 17 April 2026 06:03:58 +0000 (0:00:00.463) 0:09:00.926 ********** 2026-04-17 06:04:05.302080 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-3)  2026-04-17 06:04:05.302095 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-17 06:04:05.302108 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-17 06:04:05.302121 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.302133 | orchestrator | 2026-04-17 06:04:05.302146 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:04:05.302174 | orchestrator | Friday 17 April 2026 06:03:58 +0000 (0:00:00.417) 0:09:01.344 ********** 2026-04-17 06:04:05.302187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-17 06:04:05.302201 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-17 06:04:05.302214 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-17 06:04:05.302226 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.302238 | orchestrator | 2026-04-17 06:04:05.302252 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 06:04:05.302265 | orchestrator | Friday 17 April 2026 06:03:59 +0000 (0:00:00.463) 0:09:01.807 ********** 2026-04-17 06:04:05.302289 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.302321 | orchestrator | 2026-04-17 06:04:05.302332 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:04:05.302343 | orchestrator | Friday 17 April 2026 06:03:59 +0000 (0:00:00.180) 0:09:01.987 ********** 2026-04-17 06:04:05.302355 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-17 06:04:05.302365 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.302376 | orchestrator | 2026-04-17 06:04:05.302387 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-17 06:04:05.302398 | orchestrator | Friday 17 April 2026 06:03:59 
+0000 (0:00:00.310) 0:09:02.298 ********** 2026-04-17 06:04:05.302409 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.302419 | orchestrator | 2026-04-17 06:04:05.302430 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-17 06:04:05.302441 | orchestrator | Friday 17 April 2026 06:04:00 +0000 (0:00:00.634) 0:09:02.933 ********** 2026-04-17 06:04:05.302451 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-17 06:04:05.302462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-17 06:04:05.302473 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-17 06:04:05.302483 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.302494 | orchestrator | 2026-04-17 06:04:05.302505 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-17 06:04:05.302534 | orchestrator | Friday 17 April 2026 06:04:00 +0000 (0:00:00.478) 0:09:03.411 ********** 2026-04-17 06:04:05.302545 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.302556 | orchestrator | 2026-04-17 06:04:05.302567 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-17 06:04:05.302578 | orchestrator | Friday 17 April 2026 06:04:00 +0000 (0:00:00.124) 0:09:03.535 ********** 2026-04-17 06:04:05.302589 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.302600 | orchestrator | 2026-04-17 06:04:05.302610 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-17 06:04:05.302621 | orchestrator | Friday 17 April 2026 06:04:00 +0000 (0:00:00.155) 0:09:03.691 ********** 2026-04-17 06:04:05.302631 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.302642 | orchestrator | 2026-04-17 06:04:05.302653 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-04-17 06:04:05.302663 | orchestrator | Friday 17 April 2026 06:04:01 +0000 (0:00:00.180) 0:09:03.872 ********** 2026-04-17 06:04:05.302674 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:04:05.302684 | orchestrator | 2026-04-17 06:04:05.302695 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-04-17 06:04:05.302706 | orchestrator | 2026-04-17 06:04:05.302716 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-17 06:04:05.302727 | orchestrator | Friday 17 April 2026 06:04:01 +0000 (0:00:00.670) 0:09:04.542 ********** 2026-04-17 06:04:05.302738 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.302749 | orchestrator | 2026-04-17 06:04:05.302759 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:04:05.302770 | orchestrator | Friday 17 April 2026 06:04:02 +0000 (0:00:00.267) 0:09:04.810 ********** 2026-04-17 06:04:05.302781 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.302791 | orchestrator | 2026-04-17 06:04:05.302802 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 06:04:05.302813 | orchestrator | Friday 17 April 2026 06:04:02 +0000 (0:00:00.235) 0:09:05.045 ********** 2026-04-17 06:04:05.302824 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.302834 | orchestrator | 2026-04-17 06:04:05.302845 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 06:04:05.302855 | orchestrator | Friday 17 April 2026 06:04:02 +0000 (0:00:00.143) 0:09:05.189 ********** 2026-04-17 06:04:05.302866 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.302877 | orchestrator | 2026-04-17 06:04:05.302887 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 
2026-04-17 06:04:05.302906 | orchestrator | Friday 17 April 2026 06:04:02 +0000 (0:00:00.530) 0:09:05.719 ********** 2026-04-17 06:04:05.302916 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.302927 | orchestrator | 2026-04-17 06:04:05.302938 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 06:04:05.302948 | orchestrator | Friday 17 April 2026 06:04:03 +0000 (0:00:00.164) 0:09:05.884 ********** 2026-04-17 06:04:05.302959 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.302970 | orchestrator | 2026-04-17 06:04:05.302980 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 06:04:05.303008 | orchestrator | Friday 17 April 2026 06:04:03 +0000 (0:00:00.149) 0:09:06.034 ********** 2026-04-17 06:04:05.303019 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.303030 | orchestrator | 2026-04-17 06:04:05.303041 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 06:04:05.303052 | orchestrator | Friday 17 April 2026 06:04:03 +0000 (0:00:00.160) 0:09:06.194 ********** 2026-04-17 06:04:05.303062 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.303073 | orchestrator | 2026-04-17 06:04:05.303084 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 06:04:05.303094 | orchestrator | Friday 17 April 2026 06:04:03 +0000 (0:00:00.165) 0:09:06.360 ********** 2026-04-17 06:04:05.303105 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.303116 | orchestrator | 2026-04-17 06:04:05.303127 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 06:04:05.303143 | orchestrator | Friday 17 April 2026 06:04:03 +0000 (0:00:00.149) 0:09:06.509 ********** 2026-04-17 06:04:05.303153 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.303164 
| orchestrator | 2026-04-17 06:04:05.303175 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 06:04:05.303186 | orchestrator | Friday 17 April 2026 06:04:03 +0000 (0:00:00.159) 0:09:06.669 ********** 2026-04-17 06:04:05.303196 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.303207 | orchestrator | 2026-04-17 06:04:05.303218 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 06:04:05.303228 | orchestrator | Friday 17 April 2026 06:04:04 +0000 (0:00:00.147) 0:09:06.817 ********** 2026-04-17 06:04:05.303239 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.303250 | orchestrator | 2026-04-17 06:04:05.303260 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-17 06:04:05.303271 | orchestrator | Friday 17 April 2026 06:04:04 +0000 (0:00:00.214) 0:09:07.031 ********** 2026-04-17 06:04:05.303282 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.303293 | orchestrator | 2026-04-17 06:04:05.303304 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-17 06:04:05.303314 | orchestrator | Friday 17 April 2026 06:04:04 +0000 (0:00:00.147) 0:09:07.179 ********** 2026-04-17 06:04:05.303325 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.303336 | orchestrator | 2026-04-17 06:04:05.303347 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-17 06:04:05.303358 | orchestrator | Friday 17 April 2026 06:04:04 +0000 (0:00:00.129) 0:09:07.308 ********** 2026-04-17 06:04:05.303368 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.303379 | orchestrator | 2026-04-17 06:04:05.303390 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-17 06:04:05.303401 | orchestrator | Friday 17 April 2026 
06:04:04 +0000 (0:00:00.138) 0:09:07.446 ********** 2026-04-17 06:04:05.303411 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:05.303422 | orchestrator | 2026-04-17 06:04:05.303439 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-17 06:04:12.947611 | orchestrator | Friday 17 April 2026 06:04:05 +0000 (0:00:00.596) 0:09:08.043 ********** 2026-04-17 06:04:12.947723 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.947737 | orchestrator | 2026-04-17 06:04:12.947747 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-17 06:04:12.947775 | orchestrator | Friday 17 April 2026 06:04:05 +0000 (0:00:00.127) 0:09:08.171 ********** 2026-04-17 06:04:12.947785 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.947805 | orchestrator | 2026-04-17 06:04:12.947814 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-17 06:04:12.947823 | orchestrator | Friday 17 April 2026 06:04:05 +0000 (0:00:00.135) 0:09:08.306 ********** 2026-04-17 06:04:12.947832 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.947841 | orchestrator | 2026-04-17 06:04:12.947850 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-17 06:04:12.947859 | orchestrator | Friday 17 April 2026 06:04:05 +0000 (0:00:00.162) 0:09:08.469 ********** 2026-04-17 06:04:12.947868 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.947877 | orchestrator | 2026-04-17 06:04:12.947885 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-17 06:04:12.947894 | orchestrator | Friday 17 April 2026 06:04:05 +0000 (0:00:00.170) 0:09:08.639 ********** 2026-04-17 06:04:12.947903 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.947913 | orchestrator | 2026-04-17 06:04:12.947921 | 
orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-17 06:04:12.947930 | orchestrator | Friday 17 April 2026 06:04:06 +0000 (0:00:00.146) 0:09:08.785 ********** 2026-04-17 06:04:12.947939 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.947947 | orchestrator | 2026-04-17 06:04:12.947956 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-17 06:04:12.947965 | orchestrator | Friday 17 April 2026 06:04:06 +0000 (0:00:00.145) 0:09:08.931 ********** 2026-04-17 06:04:12.947974 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948030 | orchestrator | 2026-04-17 06:04:12.948040 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-17 06:04:12.948048 | orchestrator | Friday 17 April 2026 06:04:06 +0000 (0:00:00.137) 0:09:09.068 ********** 2026-04-17 06:04:12.948057 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948066 | orchestrator | 2026-04-17 06:04:12.948075 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-17 06:04:12.948084 | orchestrator | Friday 17 April 2026 06:04:06 +0000 (0:00:00.219) 0:09:09.288 ********** 2026-04-17 06:04:12.948092 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948101 | orchestrator | 2026-04-17 06:04:12.948110 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-17 06:04:12.948118 | orchestrator | Friday 17 April 2026 06:04:06 +0000 (0:00:00.180) 0:09:09.468 ********** 2026-04-17 06:04:12.948127 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948135 | orchestrator | 2026-04-17 06:04:12.948144 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-17 06:04:12.948155 | orchestrator | Friday 17 April 2026 06:04:06 +0000 (0:00:00.144) 0:09:09.612 ********** 
2026-04-17 06:04:12.948164 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948175 | orchestrator | 2026-04-17 06:04:12.948185 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-17 06:04:12.948195 | orchestrator | Friday 17 April 2026 06:04:07 +0000 (0:00:00.139) 0:09:09.752 ********** 2026-04-17 06:04:12.948205 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948215 | orchestrator | 2026-04-17 06:04:12.948225 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-17 06:04:12.948235 | orchestrator | Friday 17 April 2026 06:04:07 +0000 (0:00:00.156) 0:09:09.909 ********** 2026-04-17 06:04:12.948244 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948254 | orchestrator | 2026-04-17 06:04:12.948264 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-17 06:04:12.948287 | orchestrator | Friday 17 April 2026 06:04:07 +0000 (0:00:00.599) 0:09:10.508 ********** 2026-04-17 06:04:12.948324 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948335 | orchestrator | 2026-04-17 06:04:12.948353 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-17 06:04:12.948363 | orchestrator | Friday 17 April 2026 06:04:07 +0000 (0:00:00.174) 0:09:10.683 ********** 2026-04-17 06:04:12.948374 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948384 | orchestrator | 2026-04-17 06:04:12.948393 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-17 06:04:12.948404 | orchestrator | Friday 17 April 2026 06:04:08 +0000 (0:00:00.182) 0:09:10.865 ********** 2026-04-17 06:04:12.948415 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948425 | orchestrator | 2026-04-17 06:04:12.948435 | orchestrator | TASK [ceph-config : Include 
create_ceph_initial_dirs.yml] ********************** 2026-04-17 06:04:12.948445 | orchestrator | Friday 17 April 2026 06:04:08 +0000 (0:00:00.220) 0:09:11.085 ********** 2026-04-17 06:04:12.948456 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948466 | orchestrator | 2026-04-17 06:04:12.948476 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-17 06:04:12.948486 | orchestrator | Friday 17 April 2026 06:04:08 +0000 (0:00:00.152) 0:09:11.238 ********** 2026-04-17 06:04:12.948497 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948508 | orchestrator | 2026-04-17 06:04:12.948519 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-17 06:04:12.948529 | orchestrator | Friday 17 April 2026 06:04:08 +0000 (0:00:00.143) 0:09:11.381 ********** 2026-04-17 06:04:12.948538 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948546 | orchestrator | 2026-04-17 06:04:12.948555 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-17 06:04:12.948564 | orchestrator | Friday 17 April 2026 06:04:08 +0000 (0:00:00.159) 0:09:11.541 ********** 2026-04-17 06:04:12.948572 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948581 | orchestrator | 2026-04-17 06:04:12.948590 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-17 06:04:12.948614 | orchestrator | Friday 17 April 2026 06:04:08 +0000 (0:00:00.134) 0:09:11.676 ********** 2026-04-17 06:04:12.948624 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948632 | orchestrator | 2026-04-17 06:04:12.948641 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-17 06:04:12.948650 | orchestrator | Friday 17 April 2026 06:04:09 +0000 (0:00:00.153) 0:09:11.830 ********** 2026-04-17 06:04:12.948658 | orchestrator | 
skipping: [testbed-node-1] 2026-04-17 06:04:12.948667 | orchestrator | 2026-04-17 06:04:12.948675 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-17 06:04:12.948684 | orchestrator | Friday 17 April 2026 06:04:09 +0000 (0:00:00.170) 0:09:12.001 ********** 2026-04-17 06:04:12.948693 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948701 | orchestrator | 2026-04-17 06:04:12.948710 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-17 06:04:12.948720 | orchestrator | Friday 17 April 2026 06:04:09 +0000 (0:00:00.144) 0:09:12.146 ********** 2026-04-17 06:04:12.948728 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948737 | orchestrator | 2026-04-17 06:04:12.948745 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-17 06:04:12.948754 | orchestrator | Friday 17 April 2026 06:04:09 +0000 (0:00:00.144) 0:09:12.290 ********** 2026-04-17 06:04:12.948763 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948771 | orchestrator | 2026-04-17 06:04:12.948780 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-17 06:04:12.948789 | orchestrator | Friday 17 April 2026 06:04:10 +0000 (0:00:00.470) 0:09:12.761 ********** 2026-04-17 06:04:12.948798 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948807 | orchestrator | 2026-04-17 06:04:12.948815 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-17 06:04:12.948824 | orchestrator | Friday 17 April 2026 06:04:10 +0000 (0:00:00.142) 0:09:12.903 ********** 2026-04-17 06:04:12.948838 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948847 | orchestrator | 2026-04-17 06:04:12.948856 | orchestrator | TASK 
[ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-17 06:04:12.948864 | orchestrator | Friday 17 April 2026 06:04:10 +0000 (0:00:00.175) 0:09:13.079 ********** 2026-04-17 06:04:12.948873 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948881 | orchestrator | 2026-04-17 06:04:12.948890 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-17 06:04:12.948899 | orchestrator | Friday 17 April 2026 06:04:10 +0000 (0:00:00.138) 0:09:13.218 ********** 2026-04-17 06:04:12.948908 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948916 | orchestrator | 2026-04-17 06:04:12.948925 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-17 06:04:12.948934 | orchestrator | Friday 17 April 2026 06:04:10 +0000 (0:00:00.142) 0:09:13.361 ********** 2026-04-17 06:04:12.948942 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.948951 | orchestrator | 2026-04-17 06:04:12.948959 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-17 06:04:12.948968 | orchestrator | Friday 17 April 2026 06:04:10 +0000 (0:00:00.259) 0:09:13.620 ********** 2026-04-17 06:04:12.948991 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.949001 | orchestrator | 2026-04-17 06:04:12.949009 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-17 06:04:12.949018 | orchestrator | Friday 17 April 2026 06:04:11 +0000 (0:00:00.132) 0:09:13.752 ********** 2026-04-17 06:04:12.949027 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.949036 | orchestrator | 2026-04-17 06:04:12.949044 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-17 06:04:12.949053 | orchestrator | Friday 17 April 2026 06:04:11 +0000 (0:00:00.262) 0:09:14.015 ********** 2026-04-17 
06:04:12.949062 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.949070 | orchestrator | 2026-04-17 06:04:12.949079 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-17 06:04:12.949092 | orchestrator | Friday 17 April 2026 06:04:11 +0000 (0:00:00.144) 0:09:14.160 ********** 2026-04-17 06:04:12.949101 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.949110 | orchestrator | 2026-04-17 06:04:12.949119 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 06:04:12.949129 | orchestrator | Friday 17 April 2026 06:04:11 +0000 (0:00:00.138) 0:09:14.299 ********** 2026-04-17 06:04:12.949137 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.949146 | orchestrator | 2026-04-17 06:04:12.949154 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-17 06:04:12.949163 | orchestrator | Friday 17 April 2026 06:04:11 +0000 (0:00:00.159) 0:09:14.458 ********** 2026-04-17 06:04:12.949172 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.949181 | orchestrator | 2026-04-17 06:04:12.949189 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 06:04:12.949198 | orchestrator | Friday 17 April 2026 06:04:11 +0000 (0:00:00.150) 0:09:14.608 ********** 2026-04-17 06:04:12.949206 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.949215 | orchestrator | 2026-04-17 06:04:12.949224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 06:04:12.949232 | orchestrator | Friday 17 April 2026 06:04:12 +0000 (0:00:00.142) 0:09:14.751 ********** 2026-04-17 06:04:12.949241 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.949250 | orchestrator | 2026-04-17 06:04:12.949258 | orchestrator | TASK 
[ceph-facts : Set_fact _interface] **************************************** 2026-04-17 06:04:12.949267 | orchestrator | Friday 17 April 2026 06:04:12 +0000 (0:00:00.504) 0:09:15.255 ********** 2026-04-17 06:04:12.949275 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-17 06:04:12.949285 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-17 06:04:12.949293 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-17 06:04:12.949307 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:12.949316 | orchestrator | 2026-04-17 06:04:12.949331 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:04:21.556198 | orchestrator | Friday 17 April 2026 06:04:12 +0000 (0:00:00.434) 0:09:15.690 ********** 2026-04-17 06:04:21.556312 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-17 06:04:21.556325 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-17 06:04:21.556334 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-17 06:04:21.556341 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:21.556349 | orchestrator | 2026-04-17 06:04:21.556358 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:04:21.556365 | orchestrator | Friday 17 April 2026 06:04:13 +0000 (0:00:00.438) 0:09:16.129 ********** 2026-04-17 06:04:21.556372 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-17 06:04:21.556380 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-17 06:04:21.556387 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-17 06:04:21.556394 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:21.556402 | orchestrator | 2026-04-17 06:04:21.556409 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] 
*************************** 2026-04-17 06:04:21.556416 | orchestrator | Friday 17 April 2026 06:04:13 +0000 (0:00:00.445) 0:09:16.574 ********** 2026-04-17 06:04:21.556423 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:21.556430 | orchestrator | 2026-04-17 06:04:21.556438 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:04:21.556448 | orchestrator | Friday 17 April 2026 06:04:13 +0000 (0:00:00.144) 0:09:16.719 ********** 2026-04-17 06:04:21.556462 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-17 06:04:21.556474 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:21.556488 | orchestrator | 2026-04-17 06:04:21.556501 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-17 06:04:21.556512 | orchestrator | Friday 17 April 2026 06:04:14 +0000 (0:00:00.357) 0:09:17.076 ********** 2026-04-17 06:04:21.556519 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:21.556526 | orchestrator | 2026-04-17 06:04:21.556533 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-17 06:04:21.556541 | orchestrator | Friday 17 April 2026 06:04:14 +0000 (0:00:00.246) 0:09:17.323 ********** 2026-04-17 06:04:21.556548 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-17 06:04:21.556555 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-17 06:04:21.556563 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-17 06:04:21.556570 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:21.556577 | orchestrator | 2026-04-17 06:04:21.556584 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-17 06:04:21.556591 | orchestrator | Friday 17 April 2026 06:04:15 +0000 (0:00:00.453) 0:09:17.776 ********** 2026-04-17 06:04:21.556598 | orchestrator | 
skipping: [testbed-node-1] 2026-04-17 06:04:21.556606 | orchestrator | 2026-04-17 06:04:21.556613 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-17 06:04:21.556620 | orchestrator | Friday 17 April 2026 06:04:15 +0000 (0:00:00.149) 0:09:17.926 ********** 2026-04-17 06:04:21.556627 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:21.556634 | orchestrator | 2026-04-17 06:04:21.556642 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-17 06:04:21.556649 | orchestrator | Friday 17 April 2026 06:04:15 +0000 (0:00:00.141) 0:09:18.068 ********** 2026-04-17 06:04:21.556656 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:21.556663 | orchestrator | 2026-04-17 06:04:21.556670 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-17 06:04:21.556677 | orchestrator | Friday 17 April 2026 06:04:15 +0000 (0:00:00.166) 0:09:18.235 ********** 2026-04-17 06:04:21.556703 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:04:21.556711 | orchestrator | 2026-04-17 06:04:21.556718 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-04-17 06:04:21.556725 | orchestrator | 2026-04-17 06:04:21.556745 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-17 06:04:21.556754 | orchestrator | Friday 17 April 2026 06:04:16 +0000 (0:00:00.982) 0:09:19.217 ********** 2026-04-17 06:04:21.556762 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.556770 | orchestrator | 2026-04-17 06:04:21.556778 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:04:21.556786 | orchestrator | Friday 17 April 2026 06:04:16 +0000 (0:00:00.242) 0:09:19.460 ********** 2026-04-17 06:04:21.556795 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
06:04:21.556803 | orchestrator | 2026-04-17 06:04:21.556811 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 06:04:21.556819 | orchestrator | Friday 17 April 2026 06:04:16 +0000 (0:00:00.220) 0:09:19.681 ********** 2026-04-17 06:04:21.556827 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.556836 | orchestrator | 2026-04-17 06:04:21.556844 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 06:04:21.556852 | orchestrator | Friday 17 April 2026 06:04:17 +0000 (0:00:00.147) 0:09:19.828 ********** 2026-04-17 06:04:21.556861 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.556869 | orchestrator | 2026-04-17 06:04:21.556876 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 06:04:21.556884 | orchestrator | Friday 17 April 2026 06:04:17 +0000 (0:00:00.136) 0:09:19.964 ********** 2026-04-17 06:04:21.556893 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.556900 | orchestrator | 2026-04-17 06:04:21.556908 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 06:04:21.556917 | orchestrator | Friday 17 April 2026 06:04:17 +0000 (0:00:00.150) 0:09:20.115 ********** 2026-04-17 06:04:21.556925 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.556933 | orchestrator | 2026-04-17 06:04:21.556941 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 06:04:21.556949 | orchestrator | Friday 17 April 2026 06:04:17 +0000 (0:00:00.146) 0:09:20.261 ********** 2026-04-17 06:04:21.556957 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.556966 | orchestrator | 2026-04-17 06:04:21.557005 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 06:04:21.557015 | orchestrator | Friday 17 
April 2026 06:04:17 +0000 (0:00:00.143) 0:09:20.405 ********** 2026-04-17 06:04:21.557023 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557031 | orchestrator | 2026-04-17 06:04:21.557040 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 06:04:21.557049 | orchestrator | Friday 17 April 2026 06:04:17 +0000 (0:00:00.160) 0:09:20.566 ********** 2026-04-17 06:04:21.557057 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557065 | orchestrator | 2026-04-17 06:04:21.557073 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 06:04:21.557081 | orchestrator | Friday 17 April 2026 06:04:17 +0000 (0:00:00.131) 0:09:20.697 ********** 2026-04-17 06:04:21.557089 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557097 | orchestrator | 2026-04-17 06:04:21.557105 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 06:04:21.557113 | orchestrator | Friday 17 April 2026 06:04:18 +0000 (0:00:00.156) 0:09:20.854 ********** 2026-04-17 06:04:21.557121 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557128 | orchestrator | 2026-04-17 06:04:21.557135 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 06:04:21.557142 | orchestrator | Friday 17 April 2026 06:04:18 +0000 (0:00:00.473) 0:09:21.328 ********** 2026-04-17 06:04:21.557149 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557157 | orchestrator | 2026-04-17 06:04:21.557170 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-17 06:04:21.557177 | orchestrator | Friday 17 April 2026 06:04:18 +0000 (0:00:00.207) 0:09:21.535 ********** 2026-04-17 06:04:21.557184 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557191 | orchestrator | 2026-04-17 06:04:21.557198 | 
orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-17 06:04:21.557205 | orchestrator | Friday 17 April 2026 06:04:18 +0000 (0:00:00.145) 0:09:21.681 ********** 2026-04-17 06:04:21.557212 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557219 | orchestrator | 2026-04-17 06:04:21.557226 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-17 06:04:21.557233 | orchestrator | Friday 17 April 2026 06:04:19 +0000 (0:00:00.135) 0:09:21.816 ********** 2026-04-17 06:04:21.557240 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557247 | orchestrator | 2026-04-17 06:04:21.557255 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-17 06:04:21.557262 | orchestrator | Friday 17 April 2026 06:04:19 +0000 (0:00:00.168) 0:09:21.985 ********** 2026-04-17 06:04:21.557269 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557276 | orchestrator | 2026-04-17 06:04:21.557283 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-17 06:04:21.557290 | orchestrator | Friday 17 April 2026 06:04:19 +0000 (0:00:00.149) 0:09:22.134 ********** 2026-04-17 06:04:21.557297 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557304 | orchestrator | 2026-04-17 06:04:21.557311 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-17 06:04:21.557319 | orchestrator | Friday 17 April 2026 06:04:19 +0000 (0:00:00.138) 0:09:22.273 ********** 2026-04-17 06:04:21.557326 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557333 | orchestrator | 2026-04-17 06:04:21.557340 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-17 06:04:21.557347 | orchestrator | Friday 17 April 2026 06:04:19 +0000 (0:00:00.141) 0:09:22.414 ********** 
2026-04-17 06:04:21.557354 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557361 | orchestrator | 2026-04-17 06:04:21.557368 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-17 06:04:21.557376 | orchestrator | Friday 17 April 2026 06:04:19 +0000 (0:00:00.137) 0:09:22.551 ********** 2026-04-17 06:04:21.557383 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557390 | orchestrator | 2026-04-17 06:04:21.557397 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-17 06:04:21.557408 | orchestrator | Friday 17 April 2026 06:04:19 +0000 (0:00:00.142) 0:09:22.694 ********** 2026-04-17 06:04:21.557415 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557422 | orchestrator | 2026-04-17 06:04:21.557429 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-17 06:04:21.557436 | orchestrator | Friday 17 April 2026 06:04:20 +0000 (0:00:00.186) 0:09:22.880 ********** 2026-04-17 06:04:21.557443 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557450 | orchestrator | 2026-04-17 06:04:21.557457 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-17 06:04:21.557464 | orchestrator | Friday 17 April 2026 06:04:20 +0000 (0:00:00.170) 0:09:23.051 ********** 2026-04-17 06:04:21.557472 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557479 | orchestrator | 2026-04-17 06:04:21.557486 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-17 06:04:21.557493 | orchestrator | Friday 17 April 2026 06:04:20 +0000 (0:00:00.548) 0:09:23.600 ********** 2026-04-17 06:04:21.557500 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557507 | orchestrator | 2026-04-17 06:04:21.557514 | orchestrator | TASK [ceph-container-common : Generate 
systemd ceph target file] *************** 2026-04-17 06:04:21.557521 | orchestrator | Friday 17 April 2026 06:04:21 +0000 (0:00:00.227) 0:09:23.828 ********** 2026-04-17 06:04:21.557528 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557541 | orchestrator | 2026-04-17 06:04:21.557549 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-17 06:04:21.557556 | orchestrator | Friday 17 April 2026 06:04:21 +0000 (0:00:00.166) 0:09:23.994 ********** 2026-04-17 06:04:21.557563 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557570 | orchestrator | 2026-04-17 06:04:21.557577 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-17 06:04:21.557584 | orchestrator | Friday 17 April 2026 06:04:21 +0000 (0:00:00.143) 0:09:24.138 ********** 2026-04-17 06:04:21.557591 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:21.557598 | orchestrator | 2026-04-17 06:04:21.557606 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-17 06:04:21.557617 | orchestrator | Friday 17 April 2026 06:04:21 +0000 (0:00:00.160) 0:09:24.298 ********** 2026-04-17 06:04:30.225155 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.225271 | orchestrator | 2026-04-17 06:04:30.225290 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-17 06:04:30.225303 | orchestrator | Friday 17 April 2026 06:04:21 +0000 (0:00:00.152) 0:09:24.451 ********** 2026-04-17 06:04:30.225315 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.225326 | orchestrator | 2026-04-17 06:04:30.225337 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-17 06:04:30.225348 | orchestrator | Friday 17 April 2026 06:04:21 +0000 (0:00:00.165) 0:09:24.617 ********** 2026-04-17 06:04:30.225363 | orchestrator | skipping: 
[testbed-node-2] 2026-04-17 06:04:30.225383 | orchestrator | 2026-04-17 06:04:30.225401 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-17 06:04:30.225419 | orchestrator | Friday 17 April 2026 06:04:22 +0000 (0:00:00.181) 0:09:24.798 ********** 2026-04-17 06:04:30.225435 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.225453 | orchestrator | 2026-04-17 06:04:30.225469 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-17 06:04:30.225486 | orchestrator | Friday 17 April 2026 06:04:22 +0000 (0:00:00.186) 0:09:24.985 ********** 2026-04-17 06:04:30.225504 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.225521 | orchestrator | 2026-04-17 06:04:30.225540 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-17 06:04:30.225560 | orchestrator | Friday 17 April 2026 06:04:22 +0000 (0:00:00.265) 0:09:25.251 ********** 2026-04-17 06:04:30.225579 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.225598 | orchestrator | 2026-04-17 06:04:30.225618 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-17 06:04:30.225637 | orchestrator | Friday 17 April 2026 06:04:22 +0000 (0:00:00.145) 0:09:25.396 ********** 2026-04-17 06:04:30.225655 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.225674 | orchestrator | 2026-04-17 06:04:30.225692 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-17 06:04:30.225712 | orchestrator | Friday 17 April 2026 06:04:23 +0000 (0:00:00.507) 0:09:25.904 ********** 2026-04-17 06:04:30.225733 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.225752 | orchestrator | 2026-04-17 06:04:30.225770 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-17 06:04:30.225783 
| orchestrator | Friday 17 April 2026 06:04:23 +0000 (0:00:00.138) 0:09:26.042 ********** 2026-04-17 06:04:30.225796 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.225810 | orchestrator | 2026-04-17 06:04:30.225824 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-17 06:04:30.225837 | orchestrator | Friday 17 April 2026 06:04:23 +0000 (0:00:00.138) 0:09:26.180 ********** 2026-04-17 06:04:30.225850 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.225862 | orchestrator | 2026-04-17 06:04:30.225874 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-17 06:04:30.225887 | orchestrator | Friday 17 April 2026 06:04:23 +0000 (0:00:00.162) 0:09:26.343 ********** 2026-04-17 06:04:30.225900 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.225936 | orchestrator | 2026-04-17 06:04:30.225947 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-17 06:04:30.225957 | orchestrator | Friday 17 April 2026 06:04:23 +0000 (0:00:00.145) 0:09:26.489 ********** 2026-04-17 06:04:30.225968 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.225979 | orchestrator | 2026-04-17 06:04:30.225989 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-17 06:04:30.226001 | orchestrator | Friday 17 April 2026 06:04:23 +0000 (0:00:00.148) 0:09:26.637 ********** 2026-04-17 06:04:30.226012 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226120 | orchestrator | 2026-04-17 06:04:30.226131 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-17 06:04:30.226157 | orchestrator | Friday 17 April 2026 06:04:24 +0000 (0:00:00.162) 0:09:26.799 ********** 2026-04-17 06:04:30.226168 | orchestrator | skipping: [testbed-node-2] 
2026-04-17 06:04:30.226179 | orchestrator | 2026-04-17 06:04:30.226190 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-17 06:04:30.226200 | orchestrator | Friday 17 April 2026 06:04:24 +0000 (0:00:00.145) 0:09:26.945 ********** 2026-04-17 06:04:30.226211 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226221 | orchestrator | 2026-04-17 06:04:30.226232 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-17 06:04:30.226243 | orchestrator | Friday 17 April 2026 06:04:24 +0000 (0:00:00.142) 0:09:27.087 ********** 2026-04-17 06:04:30.226253 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226265 | orchestrator | 2026-04-17 06:04:30.226275 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-17 06:04:30.226286 | orchestrator | Friday 17 April 2026 06:04:24 +0000 (0:00:00.147) 0:09:27.235 ********** 2026-04-17 06:04:30.226297 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226307 | orchestrator | 2026-04-17 06:04:30.226318 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-17 06:04:30.226329 | orchestrator | Friday 17 April 2026 06:04:24 +0000 (0:00:00.163) 0:09:27.399 ********** 2026-04-17 06:04:30.226339 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226350 | orchestrator | 2026-04-17 06:04:30.226361 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-17 06:04:30.226372 | orchestrator | Friday 17 April 2026 06:04:24 +0000 (0:00:00.131) 0:09:27.531 ********** 2026-04-17 06:04:30.226382 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226393 | orchestrator | 2026-04-17 06:04:30.226404 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 
2026-04-17 06:04:30.226414 | orchestrator | Friday 17 April 2026 06:04:25 +0000 (0:00:00.251) 0:09:27.783 ********** 2026-04-17 06:04:30.226425 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226436 | orchestrator | 2026-04-17 06:04:30.226447 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-17 06:04:30.226479 | orchestrator | Friday 17 April 2026 06:04:25 +0000 (0:00:00.166) 0:09:27.949 ********** 2026-04-17 06:04:30.226490 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226501 | orchestrator | 2026-04-17 06:04:30.226511 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-17 06:04:30.226522 | orchestrator | Friday 17 April 2026 06:04:26 +0000 (0:00:01.038) 0:09:28.987 ********** 2026-04-17 06:04:30.226532 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226543 | orchestrator | 2026-04-17 06:04:30.226554 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-17 06:04:30.226564 | orchestrator | Friday 17 April 2026 06:04:26 +0000 (0:00:00.175) 0:09:29.162 ********** 2026-04-17 06:04:30.226575 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226586 | orchestrator | 2026-04-17 06:04:30.226597 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 06:04:30.226618 | orchestrator | Friday 17 April 2026 06:04:26 +0000 (0:00:00.148) 0:09:29.311 ********** 2026-04-17 06:04:30.226629 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226639 | orchestrator | 2026-04-17 06:04:30.226650 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-17 06:04:30.226660 | orchestrator | Friday 17 April 2026 06:04:26 +0000 (0:00:00.135) 0:09:29.447 ********** 2026-04-17 06:04:30.226671 | orchestrator 
| skipping: [testbed-node-2] 2026-04-17 06:04:30.226681 | orchestrator | 2026-04-17 06:04:30.226692 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 06:04:30.226702 | orchestrator | Friday 17 April 2026 06:04:26 +0000 (0:00:00.136) 0:09:29.584 ********** 2026-04-17 06:04:30.226713 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226730 | orchestrator | 2026-04-17 06:04:30.226748 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 06:04:30.226765 | orchestrator | Friday 17 April 2026 06:04:26 +0000 (0:00:00.154) 0:09:29.738 ********** 2026-04-17 06:04:30.226783 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226800 | orchestrator | 2026-04-17 06:04:30.226817 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 06:04:30.226836 | orchestrator | Friday 17 April 2026 06:04:27 +0000 (0:00:00.146) 0:09:29.884 ********** 2026-04-17 06:04:30.226855 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-17 06:04:30.226876 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-17 06:04:30.226894 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-17 06:04:30.226912 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226924 | orchestrator | 2026-04-17 06:04:30.226934 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:04:30.226945 | orchestrator | Friday 17 April 2026 06:04:27 +0000 (0:00:00.439) 0:09:30.323 ********** 2026-04-17 06:04:30.226955 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-17 06:04:30.226966 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-17 06:04:30.226977 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-17 06:04:30.226987 | 
orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.226998 | orchestrator | 2026-04-17 06:04:30.227009 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:04:30.227072 | orchestrator | Friday 17 April 2026 06:04:28 +0000 (0:00:00.433) 0:09:30.757 ********** 2026-04-17 06:04:30.227085 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-17 06:04:30.227096 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-17 06:04:30.227107 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-17 06:04:30.227117 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.227128 | orchestrator | 2026-04-17 06:04:30.227140 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 06:04:30.227168 | orchestrator | Friday 17 April 2026 06:04:28 +0000 (0:00:00.416) 0:09:31.174 ********** 2026-04-17 06:04:30.227189 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.227210 | orchestrator | 2026-04-17 06:04:30.227230 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:04:30.227249 | orchestrator | Friday 17 April 2026 06:04:28 +0000 (0:00:00.121) 0:09:31.295 ********** 2026-04-17 06:04:30.227267 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-17 06:04:30.227287 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.227308 | orchestrator | 2026-04-17 06:04:30.227329 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-17 06:04:30.227348 | orchestrator | Friday 17 April 2026 06:04:28 +0000 (0:00:00.335) 0:09:31.631 ********** 2026-04-17 06:04:30.227361 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.227372 | orchestrator | 2026-04-17 06:04:30.227383 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] 
********************************** 2026-04-17 06:04:30.227404 | orchestrator | Friday 17 April 2026 06:04:29 +0000 (0:00:00.554) 0:09:32.186 ********** 2026-04-17 06:04:30.227415 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-17 06:04:30.227426 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-17 06:04:30.227437 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-17 06:04:30.227447 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.227458 | orchestrator | 2026-04-17 06:04:30.227469 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-17 06:04:30.227479 | orchestrator | Friday 17 April 2026 06:04:29 +0000 (0:00:00.491) 0:09:32.677 ********** 2026-04-17 06:04:30.227490 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.227501 | orchestrator | 2026-04-17 06:04:30.227511 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-17 06:04:30.227522 | orchestrator | Friday 17 April 2026 06:04:30 +0000 (0:00:00.135) 0:09:32.813 ********** 2026-04-17 06:04:30.227533 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:30.227543 | orchestrator | 2026-04-17 06:04:30.227554 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-17 06:04:30.227575 | orchestrator | Friday 17 April 2026 06:04:30 +0000 (0:00:00.153) 0:09:32.967 ********** 2026-04-17 06:04:53.298717 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:53.298858 | orchestrator | 2026-04-17 06:04:53.298883 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-17 06:04:53.298902 | orchestrator | Friday 17 April 2026 06:04:30 +0000 (0:00:00.162) 0:09:33.130 ********** 2026-04-17 06:04:53.298920 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:04:53.298937 | orchestrator | 2026-04-17 
06:04:53.298956 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-04-17 06:04:53.298974 | orchestrator | 2026-04-17 06:04:53.298991 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-17 06:04:53.299009 | orchestrator | Friday 17 April 2026 06:04:31 +0000 (0:00:00.615) 0:09:33.745 ********** 2026-04-17 06:04:53.299027 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:04:53.299046 | orchestrator | 2026-04-17 06:04:53.299065 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-04-17 06:04:53.299082 | orchestrator | Friday 17 April 2026 06:04:42 +0000 (0:00:11.897) 0:09:45.643 ********** 2026-04-17 06:04:53.299100 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:04:53.299117 | orchestrator | 2026-04-17 06:04:53.299134 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:04:53.299150 | orchestrator | Friday 17 April 2026 06:04:44 +0000 (0:00:01.520) 0:09:47.164 ********** 2026-04-17 06:04:53.299217 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-04-17 06:04:53.299237 | orchestrator | 2026-04-17 06:04:53.299256 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 06:04:53.299276 | orchestrator | Friday 17 April 2026 06:04:44 +0000 (0:00:00.249) 0:09:47.413 ********** 2026-04-17 06:04:53.299296 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:04:53.299316 | orchestrator | 2026-04-17 06:04:53.299335 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-17 06:04:53.299354 | orchestrator | Friday 17 April 2026 06:04:45 +0000 (0:00:00.818) 0:09:48.231 ********** 2026-04-17 06:04:53.299374 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:04:53.299393 | orchestrator | 2026-04-17 06:04:53.299413 
| orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-17 06:04:53.299432 | orchestrator | Friday 17 April 2026 06:04:45 +0000 (0:00:00.156) 0:09:48.388 **********
2026-04-17 06:04:53.299452 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:04:53.299472 | orchestrator |
2026-04-17 06:04:53.299492 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-17 06:04:53.299509 | orchestrator | Friday 17 April 2026 06:04:46 +0000 (0:00:00.471) 0:09:48.859 **********
2026-04-17 06:04:53.299527 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:04:53.299581 | orchestrator |
2026-04-17 06:04:53.299602 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-17 06:04:53.299622 | orchestrator | Friday 17 April 2026 06:04:46 +0000 (0:00:00.142) 0:09:49.002 **********
2026-04-17 06:04:53.299640 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:04:53.299660 | orchestrator |
2026-04-17 06:04:53.299680 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-17 06:04:53.299699 | orchestrator | Friday 17 April 2026 06:04:46 +0000 (0:00:00.155) 0:09:49.157 **********
2026-04-17 06:04:53.299718 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:04:53.299737 | orchestrator |
2026-04-17 06:04:53.299756 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-17 06:04:53.299778 | orchestrator | Friday 17 April 2026 06:04:46 +0000 (0:00:00.170) 0:09:49.327 **********
2026-04-17 06:04:53.299798 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:53.299817 | orchestrator |
2026-04-17 06:04:53.299835 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-17 06:04:53.299854 | orchestrator | Friday 17 April 2026 06:04:46 +0000 (0:00:00.149) 0:09:49.477 **********
2026-04-17 06:04:53.299871 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:04:53.299889 | orchestrator |
2026-04-17 06:04:53.299906 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-17 06:04:53.299944 | orchestrator | Friday 17 April 2026 06:04:46 +0000 (0:00:00.156) 0:09:49.633 **********
2026-04-17 06:04:53.299963 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 06:04:53.299983 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:04:53.299995 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:04:53.300006 | orchestrator |
2026-04-17 06:04:53.300017 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-17 06:04:53.300027 | orchestrator | Friday 17 April 2026 06:04:48 +0000 (0:00:01.134) 0:09:50.768 **********
2026-04-17 06:04:53.300038 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:04:53.300049 | orchestrator |
2026-04-17 06:04:53.300059 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-17 06:04:53.300070 | orchestrator | Friday 17 April 2026 06:04:48 +0000 (0:00:00.260) 0:09:51.029 **********
2026-04-17 06:04:53.300081 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 06:04:53.300092 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:04:53.300102 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:04:53.300113 | orchestrator |
2026-04-17 06:04:53.300124 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-17 06:04:53.300134 | orchestrator | Friday 17 April 2026 06:04:50 +0000 (0:00:02.320) 0:09:53.350 **********
2026-04-17 06:04:53.300145 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 06:04:53.300156 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 06:04:53.300192 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 06:04:53.300204 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:53.300214 | orchestrator |
2026-04-17 06:04:53.300225 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-17 06:04:53.300236 | orchestrator | Friday 17 April 2026 06:04:51 +0000 (0:00:00.855) 0:09:54.205 **********
2026-04-17 06:04:53.300272 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:04:53.300292 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:04:53.300341 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:04:53.300362 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:53.300380 | orchestrator |
2026-04-17 06:04:53.300400 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-17 06:04:53.300418 | orchestrator | Friday 17 April 2026 06:04:52 +0000 (0:00:01.132) 0:09:55.337 **********
2026-04-17 06:04:53.300438 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:04:53.300463 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:04:53.300485 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:04:53.300506 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:53.300525 | orchestrator |
2026-04-17 06:04:53.300546 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-17 06:04:53.300567 | orchestrator | Friday 17 April 2026 06:04:53 +0000 (0:00:00.554) 0:09:55.891 **********
2026-04-17 06:04:53.300589 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:04:48.824390', 'end': '2026-04-17 06:04:48.859104', 'delta': '0:00:00.034714', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:04:53.300614 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:04:49.782168', 'end': '2026-04-17 06:04:49.824978', 'delta': '0:00:00.042810', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:04:53.300651 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:04:50.383250', 'end': '2026-04-17 06:04:50.436644', 'delta': '0:00:00.053394', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:04:57.290138 | orchestrator |
2026-04-17 06:04:57.290273 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-17 06:04:57.290291 | orchestrator | Friday 17 April 2026 06:04:53 +0000 (0:00:00.251) 0:09:56.143 **********
2026-04-17 06:04:57.290303 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:04:57.290316 | orchestrator |
2026-04-17 06:04:57.290327 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-17 06:04:57.290338 | orchestrator | Friday 17 April 2026 06:04:53 +0000 (0:00:00.260) 0:09:56.404 **********
2026-04-17 06:04:57.290350 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:57.290361 | orchestrator |
2026-04-17 06:04:57.290372 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-17 06:04:57.290383 | orchestrator | Friday 17 April 2026 06:04:53 +0000 (0:00:00.275) 0:09:56.679 **********
2026-04-17 06:04:57.290393 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:04:57.290404 | orchestrator |
2026-04-17 06:04:57.290415 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-17 06:04:57.290426 | orchestrator | Friday 17 April 2026 06:04:54 +0000 (0:00:00.160) 0:09:56.840 **********
2026-04-17 06:04:57.290437 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:04:57.290447 | orchestrator |
2026-04-17 06:04:57.290458 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:04:57.290469 | orchestrator | Friday 17 April 2026 06:04:55 +0000 (0:00:00.970) 0:09:57.811 **********
2026-04-17 06:04:57.290480 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:04:57.290490 | orchestrator |
2026-04-17 06:04:57.290501 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-17 06:04:57.290559 | orchestrator | Friday 17 April 2026 06:04:55 +0000 (0:00:00.188) 0:09:57.999 **********
2026-04-17 06:04:57.290572 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:57.290583 | orchestrator |
2026-04-17 06:04:57.290594 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-17 06:04:57.290608 | orchestrator | Friday 17 April 2026 06:04:55 +0000 (0:00:00.137) 0:09:58.136 **********
2026-04-17 06:04:57.290620 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:57.290634 | orchestrator |
2026-04-17 06:04:57.290647 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:04:57.290660 | orchestrator | Friday 17 April 2026 06:04:55 +0000 (0:00:00.284) 0:09:58.421 **********
2026-04-17 06:04:57.290673 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:57.290685 | orchestrator |
2026-04-17 06:04:57.290698 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-17 06:04:57.290710 | orchestrator | Friday 17 April 2026 06:04:55 +0000 (0:00:00.187) 0:09:58.608 **********
2026-04-17 06:04:57.290723 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:57.290735 | orchestrator |
2026-04-17 06:04:57.290748 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-17 06:04:57.290761 | orchestrator | Friday 17 April 2026 06:04:56 +0000 (0:00:00.149) 0:09:58.758 **********
2026-04-17 06:04:57.290774 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:57.290787 | orchestrator |
2026-04-17 06:04:57.290805 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-17 06:04:57.290818 | orchestrator | Friday 17 April 2026 06:04:56 +0000 (0:00:00.143) 0:09:58.901 **********
2026-04-17 06:04:57.290832 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:57.290844 | orchestrator |
2026-04-17 06:04:57.290857 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-17 06:04:57.290890 | orchestrator | Friday 17 April 2026 06:04:56 +0000 (0:00:00.138) 0:09:59.040 **********
2026-04-17 06:04:57.290903 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:57.290920 | orchestrator |
2026-04-17 06:04:57.290939 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-17 06:04:57.290957 | orchestrator | Friday 17 April 2026 06:04:56 +0000 (0:00:00.545) 0:09:59.585 **********
2026-04-17 06:04:57.290977 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:57.290997 | orchestrator |
2026-04-17 06:04:57.291016 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-17 06:04:57.291033 | orchestrator | Friday 17 April 2026 06:04:56 +0000 (0:00:00.152) 0:09:59.738 **********
2026-04-17 06:04:57.291044 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:57.291054 | orchestrator |
2026-04-17 06:04:57.291065 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-17 06:04:57.291076 | orchestrator | Friday 17 April 2026 06:04:57 +0000 (0:00:00.149) 0:09:59.887 **********
2026-04-17 06:04:57.291089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:04:57.291103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:04:57.291134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:04:57.291147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-17 06:04:57.291161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:04:57.291172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:04:57.291184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:04:57.291266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1d6df01d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-17 06:04:57.569536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:04:57.569609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:04:57.569617 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:04:57.569623 | orchestrator |
2026-04-17 06:04:57.569628 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-17 06:04:57.569634 | orchestrator | Friday 17 April 2026 06:04:57 +0000 (0:00:00.274) 0:10:00.162 **********
2026-04-17 06:04:57.569641 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:04:57.569674 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:04:57.569680 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:04:57.569686 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:04:57.569704 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:04:57.569709 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:04:57.569714 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:04:57.569728 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1d6df01d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:04:57.569740 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:05:09.216053 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:05:09.216156 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:05:09.216174 | orchestrator |
2026-04-17 06:05:09.216187 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-17 06:05:09.216217 | orchestrator | Friday 17 April 2026 06:04:57 +0000 (0:00:00.277) 0:10:00.439 **********
2026-04-17 06:05:09.216229 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:05:09.216241 | orchestrator |
2026-04-17 06:05:09.216252 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-17 06:05:09.216314 | orchestrator | Friday 17 April 2026 06:04:58 +0000 (0:00:00.548) 0:10:00.987 **********
2026-04-17 06:05:09.216326 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:05:09.216337 | orchestrator |
2026-04-17 06:05:09.216348 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:05:09.216359 | orchestrator | Friday 17 April 2026 06:04:58 +0000 (0:00:00.138) 0:10:01.125 **********
2026-04-17 06:05:09.216369 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:05:09.216380 | orchestrator |
2026-04-17 06:05:09.216391 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:05:09.216401 | orchestrator | Friday 17 April 2026 06:04:58 +0000 (0:00:00.487) 0:10:01.613 **********
2026-04-17 06:05:09.216412 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:05:09.216423 | orchestrator |
2026-04-17 06:05:09.216433 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:05:09.216444 | orchestrator | Friday 17 April 2026 06:04:59 +0000 (0:00:00.142) 0:10:01.755 **********
2026-04-17 06:05:09.216455 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:05:09.216465 | orchestrator |
2026-04-17 06:05:09.216476 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:05:09.216500 | orchestrator | Friday 17 April 2026 06:04:59 +0000 (0:00:00.259) 0:10:02.015 **********
2026-04-17 06:05:09.216511 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:05:09.216522 | orchestrator |
2026-04-17 06:05:09.216533 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 06:05:09.216543 | orchestrator | Friday 17 April 2026 06:04:59 +0000 (0:00:00.167) 0:10:02.183 **********
2026-04-17 06:05:09.216554 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 06:05:09.216565 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 06:05:09.216576 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 06:05:09.216587 | orchestrator |
2026-04-17 06:05:09.216598 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 06:05:09.216611 | orchestrator | Friday 17 April 2026 06:05:00 +0000 (0:00:01.147) 0:10:03.330 **********
2026-04-17 06:05:09.216623 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 06:05:09.216636 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 06:05:09.216649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 06:05:09.216662 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:05:09.216675 | orchestrator |
2026-04-17 06:05:09.216687 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-17 06:05:09.216700 | orchestrator | Friday 17 April 2026 06:05:00 +0000 (0:00:00.198) 0:10:03.528 **********
2026-04-17 06:05:09.216713 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:05:09.216725 | orchestrator |
2026-04-17 06:05:09.216738 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-17 06:05:09.216751 | orchestrator | Friday 17 April 2026 06:05:00 +0000 (0:00:00.130) 0:10:03.659 **********
2026-04-17 06:05:09.216763 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 06:05:09.216775 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:05:09.216788 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:05:09.216800 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 06:05:09.216813 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 06:05:09.216825 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:05:09.216847 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:05:09.216860 | orchestrator |
2026-04-17 06:05:09.216871 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-17 06:05:09.216881 | orchestrator | Friday 17 April 2026 06:05:02 +0000 (0:00:01.586) 0:10:05.246 **********
2026-04-17 06:05:09.216892 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 06:05:09.216903 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:05:09.216913 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:05:09.216924 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 06:05:09.216951 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 06:05:09.216963 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:05:09.216973 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:05:09.216984 | orchestrator |
2026-04-17 06:05:09.216995 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 06:05:09.217005 | orchestrator | Friday 17 April 2026 06:05:04 +0000 (0:00:01.755) 0:10:07.002 **********
2026-04-17 06:05:09.217016 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-04-17 06:05:09.217027 | orchestrator |
2026-04-17 06:05:09.217038 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 06:05:09.217049 | orchestrator | Friday 17 April 2026 06:05:04 +0000 (0:00:00.243) 0:10:07.245 **********
2026-04-17 06:05:09.217059 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-04-17 06:05:09.217070 | orchestrator |
2026-04-17 06:05:09.217080 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 06:05:09.217091 | orchestrator | Friday 17 April 2026 06:05:04 +0000 (0:00:00.230) 0:10:07.476 **********
2026-04-17 06:05:09.217102 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:05:09.217112 | orchestrator |
2026-04-17 06:05:09.217123 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 06:05:09.217133 | orchestrator | Friday 17 April 2026 06:05:05 +0000 (0:00:00.542) 0:10:08.019 **********
2026-04-17 06:05:09.217144 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:05:09.217155 | orchestrator |
2026-04-17 06:05:09.217165 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 06:05:09.217176 | orchestrator | Friday 17 April 2026 06:05:05 +0000 (0:00:00.187) 0:10:08.206 **********
2026-04-17 06:05:09.217187 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:05:09.217197 | orchestrator |
2026-04-17 06:05:09.217208 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 06:05:09.217219 | orchestrator | Friday 17 April 2026 06:05:05 +0000 (0:00:00.132) 0:10:08.339 **********
2026-04-17 06:05:09.217229 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:05:09.217240 | orchestrator |
2026-04-17 06:05:09.217250 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 06:05:09.217278 | orchestrator | Friday 17 April 2026 06:05:05 +0000 (0:00:00.169) 0:10:08.508 **********
2026-04-17 06:05:09.217290 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:05:09.217301 | orchestrator | 2026-04-17 06:05:09.217316 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 06:05:09.217327 | orchestrator | Friday 17 April 2026 06:05:06 +0000 (0:00:00.552) 0:10:09.061 ********** 2026-04-17 06:05:09.217338 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:05:09.217348 | orchestrator | 2026-04-17 06:05:09.217359 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 06:05:09.217370 | orchestrator | Friday 17 April 2026 06:05:06 +0000 (0:00:00.145) 0:10:09.207 ********** 2026-04-17 06:05:09.217388 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:05:09.217399 | orchestrator | 2026-04-17 06:05:09.217409 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 06:05:09.217420 | orchestrator | Friday 17 April 2026 06:05:06 +0000 (0:00:00.505) 0:10:09.712 ********** 2026-04-17 06:05:09.217431 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:05:09.217441 | orchestrator | 2026-04-17 06:05:09.217452 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 06:05:09.217463 | orchestrator | Friday 17 April 2026 06:05:07 +0000 (0:00:00.566) 0:10:10.278 ********** 2026-04-17 06:05:09.217473 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:05:09.217484 | orchestrator | 2026-04-17 06:05:09.217495 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 06:05:09.217505 | orchestrator | Friday 17 April 2026 06:05:08 +0000 (0:00:00.579) 0:10:10.857 ********** 2026-04-17 06:05:09.217516 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:05:09.217527 | orchestrator | 2026-04-17 06:05:09.217537 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 06:05:09.217548 | orchestrator | Friday 17 
April 2026 06:05:08 +0000 (0:00:00.134) 0:10:10.991 ********** 2026-04-17 06:05:09.217559 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:05:09.217570 | orchestrator | 2026-04-17 06:05:09.217580 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 06:05:09.217591 | orchestrator | Friday 17 April 2026 06:05:08 +0000 (0:00:00.160) 0:10:11.152 ********** 2026-04-17 06:05:09.217602 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:05:09.217612 | orchestrator | 2026-04-17 06:05:09.217623 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 06:05:09.217634 | orchestrator | Friday 17 April 2026 06:05:08 +0000 (0:00:00.166) 0:10:11.319 ********** 2026-04-17 06:05:09.217645 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:05:09.217655 | orchestrator | 2026-04-17 06:05:09.217666 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 06:05:09.217677 | orchestrator | Friday 17 April 2026 06:05:08 +0000 (0:00:00.148) 0:10:11.467 ********** 2026-04-17 06:05:09.217688 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:05:09.217698 | orchestrator | 2026-04-17 06:05:09.217709 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 06:05:09.217719 | orchestrator | Friday 17 April 2026 06:05:08 +0000 (0:00:00.145) 0:10:11.612 ********** 2026-04-17 06:05:09.217730 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:05:09.217741 | orchestrator | 2026-04-17 06:05:09.217751 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 06:05:09.217762 | orchestrator | Friday 17 April 2026 06:05:08 +0000 (0:00:00.131) 0:10:11.743 ********** 2026-04-17 06:05:09.217773 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:05:09.217783 | orchestrator | 2026-04-17 06:05:09.217794 | orchestrator 
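The "Check for a * container" tasks above probe which Ceph daemon containers exist on the node, and the following "Set_fact handler_*_status" tasks turn those probe results into per-daemon booleans used later by the handlers. A minimal sketch of that pattern, assuming a hypothetical container list (the function name and simulated `podman ps`-style output are illustrative, not taken from this job):

```python
# Illustrative sketch only: mimics how container checks feed the
# "Set_fact handler_*_status" facts. All names here are hypothetical.

def handler_statuses(running_containers, hostname):
    """Map daemon type -> True when a matching container name is running."""
    daemons = ["mon", "osd", "mds", "rgw", "mgr", "rbd-mirror", "nfs",
               "crash", "exporter"]
    return {
        d: any(name.startswith(f"ceph-{d}-{hostname}")
               for name in running_containers)
        for d in daemons
    }

# Simulated container list matching the ok/skipping pattern in the log above.
running = ["ceph-mon-testbed-node-0", "ceph-mgr-testbed-node-0",
           "ceph-crash-testbed-node-0", "ceph-exporter-testbed-node-0"]
status = handler_statuses(running, "testbed-node-0")
print(status["mon"], status["osd"], status["mgr"])  # True False True
```

This matches the log's shape: mon/mgr/crash/exporter checks return `ok` (containers present), while osd/mds/rgw/nfs/rbd-mirror checks are skipped on this node.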
orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
orchestrator | Friday 17 April 2026 06:05:09 +0000 (0:00:00.141) 0:10:11.885 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
orchestrator | Friday 17 April 2026 06:05:09 +0000 (0:00:00.146) 0:10:12.032 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
orchestrator | Friday 17 April 2026 06:05:09 +0000 (0:00:00.143) 0:10:12.176 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
orchestrator | Friday 17 April 2026 06:05:10 +0000 (0:00:00.576) 0:10:12.752 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
orchestrator | Friday 17 April 2026 06:05:10 +0000 (0:00:00.172) 0:10:12.925 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
orchestrator | Friday 17 April 2026 06:05:10 +0000 (0:00:00.137) 0:10:13.062 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
orchestrator | Friday 17 April 2026 06:05:10 +0000 (0:00:00.125) 0:10:13.187 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
orchestrator | Friday 17 April 2026 06:05:10 +0000 (0:00:00.126) 0:10:13.314 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Get ceph version] ******************************************
orchestrator | Friday 17 April 2026 06:05:10 +0000 (0:00:00.142) 0:10:13.457 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
orchestrator | Friday 17 April 2026 06:05:10 +0000 (0:00:00.144) 0:10:13.601 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
orchestrator | Friday 17 April 2026 06:05:11 +0000 (0:00:00.143) 0:10:13.745 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
orchestrator | Friday 17 April 2026 06:05:11 +0000 (0:00:00.139) 0:10:13.884 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
orchestrator | Friday 17 April 2026 06:05:11 +0000 (0:00:00.150) 0:10:14.035 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
orchestrator | Friday 17 April 2026 06:05:11 +0000 (0:00:00.137) 0:10:14.173 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
orchestrator | Friday 17 April 2026 06:05:11 +0000 (0:00:00.142) 0:10:14.315 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
orchestrator | Friday 17 April 2026 06:05:12 +0000 (0:00:00.584) 0:10:14.900 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
orchestrator | Friday 17 April 2026 06:05:13 +0000 (0:00:00.970) 0:10:15.871 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
orchestrator | Friday 17 April 2026 06:05:14 +0000 (0:00:01.441) 0:10:17.312 **********
orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
orchestrator |
orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
orchestrator | Friday 17 April 2026 06:05:14 +0000 (0:00:00.213) 0:10:17.525 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
orchestrator | Friday 17 April 2026 06:05:14 +0000 (0:00:00.165) 0:10:17.690 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
orchestrator | Friday 17 April 2026 06:05:15 +0000 (0:00:00.174) 0:10:17.865 **********
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator |
orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
orchestrator | Friday 17 April 2026 06:05:15 +0000 (0:00:00.839) 0:10:18.704 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
orchestrator | Friday 17 April 2026 06:05:16 +0000 (0:00:00.534) 0:10:19.239 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
orchestrator | Friday 17 April 2026 06:05:16 +0000 (0:00:00.156) 0:10:19.395 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
orchestrator | Friday 17 April 2026 06:05:16 +0000 (0:00:00.143) 0:10:19.539 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
orchestrator | Friday 17 April 2026 06:05:16 +0000 (0:00:00.132) 0:10:19.671 **********
orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
orchestrator | Friday 17 April 2026 06:05:17 +0000 (0:00:00.228) 0:10:19.900 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
orchestrator | Friday 17 April 2026 06:05:19 +0000 (0:00:02.098) 0:10:21.998 **********
orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
orchestrator | Friday 17 April 2026 06:05:19 +0000 (0:00:00.165) 0:10:22.164 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
orchestrator | Friday 17 April 2026 06:05:19 +0000 (0:00:00.155) 0:10:22.319 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
orchestrator | Friday 17 April 2026 06:05:19 +0000 (0:00:00.187) 0:10:22.507 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
orchestrator | Friday 17 April 2026 06:05:19 +0000 (0:00:00.161) 0:10:22.669 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
orchestrator | Friday 17 April 2026 06:05:20 +0000 (0:00:00.161) 0:10:22.830 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
orchestrator | Friday 17 April 2026 06:05:20 +0000 (0:00:00.187) 0:10:23.018 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
orchestrator | Friday 17 April 2026 06:05:21 +0000 (0:00:01.572) 0:10:24.590 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
orchestrator | Friday 17 April 2026 06:05:21 +0000 (0:00:00.142) 0:10:24.733 **********
orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
orchestrator | Friday 17 April 2026 06:05:22 +0000 (0:00:00.239) 0:10:24.972 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
orchestrator | Friday 17 April 2026 06:05:22 +0000 (0:00:00.153) 0:10:25.125 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
orchestrator | Friday 17 April 2026 06:05:22 +0000 (0:00:00.167) 0:10:25.293 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
orchestrator | Friday 17 April 2026 06:05:23 +0000 (0:00:00.598) 0:10:25.892 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
orchestrator | Friday 17 April 2026 06:05:23 +0000 (0:00:00.169) 0:10:26.062 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
orchestrator | Friday 17 April 2026 06:05:23 +0000 (0:00:00.162) 0:10:26.224 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
orchestrator | Friday 17 April 2026 06:05:23 +0000 (0:00:00.182) 0:10:26.407 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
orchestrator | Friday 17 April 2026 06:05:23 +0000 (0:00:00.163) 0:10:26.571 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
orchestrator | Friday 17 April 2026 06:05:23 +0000 (0:00:00.162) 0:10:26.734 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
orchestrator | Friday 17 April 2026 06:05:24 +0000 (0:00:00.247) 0:10:26.981 **********
orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
orchestrator |
orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
orchestrator | Friday 17 April 2026 06:05:24 +0000 (0:00:00.212) 0:10:27.194 **********
orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
orchestrator |
orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
orchestrator | Friday 17 April 2026 06:05:30 +0000 (0:00:05.687) 0:10:32.881 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
orchestrator | Friday 17 April 2026 06:05:30 +0000 (0:00:00.122) 0:10:33.004 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
orchestrator | Friday 17 April 2026 06:05:30 +0000 (0:00:00.150) 0:10:33.155 **********
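The "Set_fact ceph_version ceph_version.stdout.split" task above splits the output of `ceph --version`, and the subsequent "Set_fact ceph_release …" cascade picks the codename matching the major version (here, 18 selects reef and the others are skipped). A hedged sketch of that mapping, under the assumption that version stdout has the usual `ceph version X.Y.Z (...)` shape; function and variable names are illustrative:

```python
# Illustrative sketch: map the major version from `ceph --version` output to a
# release codename, mirroring the Set_fact ceph_release cascade in the log.
RELEASES = {10: "jewel", 11: "kraken", 12: "luminous", 13: "mimic",
            14: "nautilus", 15: "octopus", 16: "pacific", 17: "quincy",
            18: "reef"}

def ceph_release(version_stdout: str) -> str:
    # e.g. "ceph version 18.2.2 (abc123) reef (stable)" -> "18.2.2"
    version = version_stdout.split()[2]   # like ceph_version.stdout.split
    major = int(version.split(".")[0])
    return RELEASES.get(major, "unknown")

print(ceph_release("ceph version 18.2.2 (abc123) reef (stable)"))  # reef
```

With a version 18 image, only the reef branch of the cascade fires, which is exactly the ok/skipping pattern the log shows.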
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
orchestrator | Friday 17 April 2026 06:05:30 +0000 (0:00:00.506) 0:10:33.661 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
orchestrator | Friday 17 April 2026 06:05:31 +0000 (0:00:00.173) 0:10:33.835 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
orchestrator | Friday 17 April 2026 06:05:31 +0000 (0:00:00.144) 0:10:33.979 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
orchestrator | Friday 17 April 2026 06:05:31 +0000 (0:00:00.151) 0:10:34.131 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
orchestrator | Friday 17 April 2026 06:05:31 +0000 (0:00:00.146) 0:10:34.278 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
orchestrator | Friday 17 April 2026 06:05:31 +0000 (0:00:00.159) 0:10:34.437 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
orchestrator | Friday 17 April 2026 06:05:31 +0000 (0:00:00.140) 0:10:34.577 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
orchestrator | Friday 17 April 2026 06:05:31 +0000 (0:00:00.142) 0:10:34.720 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
orchestrator | Friday 17 April 2026 06:05:32 +0000 (0:00:00.161) 0:10:34.881 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
orchestrator | Friday 17 April 2026 06:05:32 +0000 (0:00:00.141) 0:10:35.023 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
orchestrator | Friday 17 April 2026 06:05:32 +0000 (0:00:00.236) 0:10:35.259 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Set config to cluster] *************************************
orchestrator | Friday 17 April 2026 06:05:32 +0000 (0:00:00.129) 0:10:35.389 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
orchestrator | Friday 17 April 2026 06:05:32 +0000 (0:00:00.244) 0:10:35.633 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
orchestrator | Friday 17 April 2026 06:05:33 +0000 (0:00:00.542) 0:10:36.176 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
orchestrator | Friday 17 April 2026 06:05:33 +0000 (0:00:00.144) 0:10:36.320 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
orchestrator | Friday 17 April 2026 06:05:33 +0000 (0:00:00.141) 0:10:36.462 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
orchestrator | Friday 17 April 2026 06:05:33 +0000 (0:00:00.148) 0:10:36.610 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
orchestrator | Friday 17 April 2026 06:05:34 +0000 (0:00:00.166) 0:10:36.777 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
orchestrator | Friday 17 April 2026 06:05:34 +0000 (0:00:00.147) 0:10:36.924 **********
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
orchestrator | Friday 17 April 2026 06:05:34 +0000 (0:00:00.445) 0:10:37.370 **********
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
orchestrator | Friday 17 April 2026 06:05:35 +0000 (0:00:00.443) 0:10:37.813 **********
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
orchestrator | Friday 17 April 2026 06:05:35 +0000 (0:00:00.497) 0:10:38.311 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
orchestrator | Friday 17 April 2026 06:05:35 +0000 (0:00:00.164) 0:10:38.475 **********
orchestrator | skipping: [testbed-node-0] => (item=0)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
orchestrator | Friday 17 April 2026 06:05:36 +0000 (0:00:00.367) 0:10:38.843 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
orchestrator | Friday 17 April 2026 06:05:36 +0000 (0:00:00.869) 0:10:39.712
********** 2026-04-17 06:06:15.514474 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 06:06:15.514486 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:06:15.514497 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:06:15.514508 | orchestrator | 2026-04-17 06:06:15.514519 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-17 06:06:15.514530 | orchestrator | Friday 17 April 2026 06:05:38 +0000 (0:00:01.613) 0:10:41.326 ********** 2026-04-17 06:06:15.514540 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-04-17 06:06:15.514551 | orchestrator | 2026-04-17 06:06:15.514562 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-17 06:06:15.514572 | orchestrator | Friday 17 April 2026 06:05:39 +0000 (0:00:00.615) 0:10:41.941 ********** 2026-04-17 06:06:15.514583 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:06:15.514594 | orchestrator | 2026-04-17 06:06:15.514605 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-17 06:06:15.514661 | orchestrator | Friday 17 April 2026 06:05:39 +0000 (0:00:00.496) 0:10:42.438 ********** 2026-04-17 06:06:15.514674 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:06:15.514685 | orchestrator | 2026-04-17 06:06:15.514712 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-17 06:06:15.514723 | orchestrator | Friday 17 April 2026 06:05:39 +0000 (0:00:00.147) 0:10:42.586 ********** 2026-04-17 06:06:15.514734 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-17 06:06:15.514748 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-17 06:06:15.514761 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-17 
06:06:15.514773 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-17 06:06:15.514786 | orchestrator | 2026-04-17 06:06:15.514798 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-17 06:06:15.514810 | orchestrator | Friday 17 April 2026 06:05:45 +0000 (0:00:06.082) 0:10:48.668 ********** 2026-04-17 06:06:15.514823 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:06:15.514836 | orchestrator | 2026-04-17 06:06:15.514848 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-17 06:06:15.514861 | orchestrator | Friday 17 April 2026 06:05:46 +0000 (0:00:00.210) 0:10:48.879 ********** 2026-04-17 06:06:15.514874 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-17 06:06:15.514886 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 06:06:15.514898 | orchestrator | 2026-04-17 06:06:15.514911 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-17 06:06:15.514923 | orchestrator | Friday 17 April 2026 06:05:48 +0000 (0:00:02.310) 0:10:51.190 ********** 2026-04-17 06:06:15.514936 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-17 06:06:15.514974 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-17 06:06:15.514986 | orchestrator | 2026-04-17 06:06:15.514998 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-17 06:06:15.515011 | orchestrator | Friday 17 April 2026 06:05:49 +0000 (0:00:01.039) 0:10:52.230 ********** 2026-04-17 06:06:15.515023 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:06:15.515036 | orchestrator | 2026-04-17 06:06:15.515048 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-17 06:06:15.515060 | orchestrator | Friday 17 April 2026 06:05:50 +0000 (0:00:00.531) 
0:10:52.761 ********** 2026-04-17 06:06:15.515073 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:06:15.515085 | orchestrator | 2026-04-17 06:06:15.515097 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-17 06:06:15.515108 | orchestrator | Friday 17 April 2026 06:05:50 +0000 (0:00:00.142) 0:10:52.903 ********** 2026-04-17 06:06:15.515119 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:06:15.515130 | orchestrator | 2026-04-17 06:06:15.515140 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-17 06:06:15.515151 | orchestrator | Friday 17 April 2026 06:05:50 +0000 (0:00:00.156) 0:10:53.060 ********** 2026-04-17 06:06:15.515162 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-04-17 06:06:15.515172 | orchestrator | 2026-04-17 06:06:15.515183 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-17 06:06:15.515194 | orchestrator | Friday 17 April 2026 06:05:51 +0000 (0:00:00.959) 0:10:54.019 ********** 2026-04-17 06:06:15.515204 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:06:15.515215 | orchestrator | 2026-04-17 06:06:15.515226 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-17 06:06:15.515237 | orchestrator | Friday 17 April 2026 06:05:51 +0000 (0:00:00.164) 0:10:54.184 ********** 2026-04-17 06:06:15.515248 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:06:15.515259 | orchestrator | 2026-04-17 06:06:15.515269 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-17 06:06:15.515298 | orchestrator | Friday 17 April 2026 06:05:51 +0000 (0:00:00.175) 0:10:54.359 ********** 2026-04-17 06:06:15.515310 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-04-17 06:06:15.515321 | 
orchestrator | 2026-04-17 06:06:15.515331 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-17 06:06:15.515342 | orchestrator | Friday 17 April 2026 06:05:52 +0000 (0:00:00.575) 0:10:54.935 ********** 2026-04-17 06:06:15.515353 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:06:15.515364 | orchestrator | 2026-04-17 06:06:15.515374 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-17 06:06:15.515385 | orchestrator | Friday 17 April 2026 06:05:53 +0000 (0:00:01.113) 0:10:56.049 ********** 2026-04-17 06:06:15.515396 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:06:15.515407 | orchestrator | 2026-04-17 06:06:15.515417 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-17 06:06:15.515428 | orchestrator | Friday 17 April 2026 06:05:54 +0000 (0:00:00.928) 0:10:56.977 ********** 2026-04-17 06:06:15.515439 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:06:15.515450 | orchestrator | 2026-04-17 06:06:15.515460 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-17 06:06:15.515471 | orchestrator | Friday 17 April 2026 06:05:55 +0000 (0:00:01.431) 0:10:58.409 ********** 2026-04-17 06:06:15.515482 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:06:15.515493 | orchestrator | 2026-04-17 06:06:15.515504 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-17 06:06:15.515515 | orchestrator | Friday 17 April 2026 06:05:58 +0000 (0:00:02.802) 0:11:01.212 ********** 2026-04-17 06:06:15.515526 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:06:15.515536 | orchestrator | 2026-04-17 06:06:15.515547 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-04-17 06:06:15.515565 | orchestrator | 2026-04-17 06:06:15.515576 | orchestrator | TASK 
[Stop ceph mgr] *********************************************************** 2026-04-17 06:06:15.515587 | orchestrator | Friday 17 April 2026 06:05:59 +0000 (0:00:00.608) 0:11:01.821 ********** 2026-04-17 06:06:15.515598 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:06:15.515609 | orchestrator | 2026-04-17 06:06:15.515644 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-04-17 06:06:15.515655 | orchestrator | Friday 17 April 2026 06:06:10 +0000 (0:00:11.894) 0:11:13.715 ********** 2026-04-17 06:06:15.515666 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:06:15.515677 | orchestrator | 2026-04-17 06:06:15.515694 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:06:15.515705 | orchestrator | Friday 17 April 2026 06:06:12 +0000 (0:00:01.567) 0:11:15.283 ********** 2026-04-17 06:06:15.515716 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-04-17 06:06:15.515727 | orchestrator | 2026-04-17 06:06:15.515738 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 06:06:15.515749 | orchestrator | Friday 17 April 2026 06:06:12 +0000 (0:00:00.233) 0:11:15.517 ********** 2026-04-17 06:06:15.515760 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:15.515771 | orchestrator | 2026-04-17 06:06:15.515782 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-17 06:06:15.515793 | orchestrator | Friday 17 April 2026 06:06:13 +0000 (0:00:00.411) 0:11:15.929 ********** 2026-04-17 06:06:15.515804 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:15.515815 | orchestrator | 2026-04-17 06:06:15.515826 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 06:06:15.515837 | orchestrator | Friday 17 April 2026 06:06:13 +0000 (0:00:00.150) 0:11:16.080 
********** 2026-04-17 06:06:15.515848 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:15.515859 | orchestrator | 2026-04-17 06:06:15.515870 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:06:15.515881 | orchestrator | Friday 17 April 2026 06:06:13 +0000 (0:00:00.419) 0:11:16.499 ********** 2026-04-17 06:06:15.515892 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:15.515903 | orchestrator | 2026-04-17 06:06:15.515915 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-17 06:06:15.515926 | orchestrator | Friday 17 April 2026 06:06:13 +0000 (0:00:00.133) 0:11:16.633 ********** 2026-04-17 06:06:15.515936 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:15.515947 | orchestrator | 2026-04-17 06:06:15.515958 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-17 06:06:15.515969 | orchestrator | Friday 17 April 2026 06:06:14 +0000 (0:00:00.130) 0:11:16.763 ********** 2026-04-17 06:06:15.515980 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:15.515991 | orchestrator | 2026-04-17 06:06:15.516002 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-17 06:06:15.516013 | orchestrator | Friday 17 April 2026 06:06:14 +0000 (0:00:00.140) 0:11:16.904 ********** 2026-04-17 06:06:15.516024 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:15.516035 | orchestrator | 2026-04-17 06:06:15.516047 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-17 06:06:15.516057 | orchestrator | Friday 17 April 2026 06:06:14 +0000 (0:00:00.143) 0:11:17.048 ********** 2026-04-17 06:06:15.516068 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:15.516079 | orchestrator | 2026-04-17 06:06:15.516091 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] 
************ 2026-04-17 06:06:15.516101 | orchestrator | Friday 17 April 2026 06:06:14 +0000 (0:00:00.129) 0:11:17.177 ********** 2026-04-17 06:06:15.516113 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:06:15.516124 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-17 06:06:15.516135 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:06:15.516146 | orchestrator | 2026-04-17 06:06:15.516163 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-17 06:06:15.516174 | orchestrator | Friday 17 April 2026 06:06:15 +0000 (0:00:00.856) 0:11:18.033 ********** 2026-04-17 06:06:15.516185 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:15.516196 | orchestrator | 2026-04-17 06:06:15.516207 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-17 06:06:15.516225 | orchestrator | Friday 17 April 2026 06:06:15 +0000 (0:00:00.216) 0:11:18.250 ********** 2026-04-17 06:06:23.153139 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:06:23.153242 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-17 06:06:23.153254 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:06:23.153262 | orchestrator | 2026-04-17 06:06:23.153271 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-17 06:06:23.153280 | orchestrator | Friday 17 April 2026 06:06:17 +0000 (0:00:02.214) 0:11:20.464 ********** 2026-04-17 06:06:23.153288 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-17 06:06:23.153296 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-17 06:06:23.153303 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-2)  2026-04-17 06:06:23.153311 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.153318 | orchestrator | 2026-04-17 06:06:23.153326 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-17 06:06:23.153333 | orchestrator | Friday 17 April 2026 06:06:18 +0000 (0:00:00.421) 0:11:20.886 ********** 2026-04-17 06:06:23.153342 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-17 06:06:23.153353 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-17 06:06:23.153360 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 06:06:23.153367 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.153375 | orchestrator | 2026-04-17 06:06:23.153397 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-17 06:06:23.153405 | orchestrator | Friday 17 April 2026 06:06:18 +0000 (0:00:00.546) 0:11:21.432 ********** 2026-04-17 06:06:23.153414 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-04-17 06:06:23.153424 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:23.153431 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:23.153455 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.153463 | orchestrator | 2026-04-17 06:06:23.153470 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-17 06:06:23.153478 | orchestrator | Friday 17 April 2026 06:06:18 +0000 (0:00:00.169) 0:11:21.602 ********** 2026-04-17 06:06:23.153487 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:06:16.228287', 'end': '2026-04-17 06:06:16.261033', 'delta': '0:00:00.032746', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-17 06:06:23.153511 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:06:16.745378', 'end': '2026-04-17 06:06:16.794377', 'delta': '0:00:00.048999', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-17 06:06:23.153520 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:06:17.283713', 'end': '2026-04-17 06:06:17.327149', 'delta': '0:00:00.043436', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-17 06:06:23.153528 | orchestrator | 2026-04-17 06:06:23.153536 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-17 06:06:23.153543 | orchestrator | Friday 17 April 2026 06:06:19 +0000 (0:00:00.190) 0:11:21.793 ********** 2026-04-17 06:06:23.153550 | 
orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:23.153558 | orchestrator | 2026-04-17 06:06:23.153569 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-17 06:06:23.153576 | orchestrator | Friday 17 April 2026 06:06:19 +0000 (0:00:00.246) 0:11:22.039 ********** 2026-04-17 06:06:23.153583 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.153591 | orchestrator | 2026-04-17 06:06:23.153598 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-17 06:06:23.153605 | orchestrator | Friday 17 April 2026 06:06:19 +0000 (0:00:00.249) 0:11:22.289 ********** 2026-04-17 06:06:23.153612 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:23.153619 | orchestrator | 2026-04-17 06:06:23.153627 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-17 06:06:23.153634 | orchestrator | Friday 17 April 2026 06:06:19 +0000 (0:00:00.135) 0:11:22.424 ********** 2026-04-17 06:06:23.153641 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:06:23.153648 | orchestrator | 2026-04-17 06:06:23.153699 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:06:23.153710 | orchestrator | Friday 17 April 2026 06:06:21 +0000 (0:00:01.920) 0:11:24.345 ********** 2026-04-17 06:06:23.153717 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:23.153724 | orchestrator | 2026-04-17 06:06:23.153732 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 06:06:23.153739 | orchestrator | Friday 17 April 2026 06:06:21 +0000 (0:00:00.139) 0:11:24.484 ********** 2026-04-17 06:06:23.153746 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.153753 | orchestrator | 2026-04-17 06:06:23.153760 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-04-17 06:06:23.153768 | orchestrator | Friday 17 April 2026 06:06:21 +0000 (0:00:00.115) 0:11:24.600 ********** 2026-04-17 06:06:23.153775 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.153782 | orchestrator | 2026-04-17 06:06:23.153789 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:06:23.153796 | orchestrator | Friday 17 April 2026 06:06:22 +0000 (0:00:00.248) 0:11:24.849 ********** 2026-04-17 06:06:23.153803 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.153811 | orchestrator | 2026-04-17 06:06:23.153818 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 06:06:23.153825 | orchestrator | Friday 17 April 2026 06:06:22 +0000 (0:00:00.108) 0:11:24.957 ********** 2026-04-17 06:06:23.153832 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.153839 | orchestrator | 2026-04-17 06:06:23.153846 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-17 06:06:23.153853 | orchestrator | Friday 17 April 2026 06:06:22 +0000 (0:00:00.350) 0:11:25.307 ********** 2026-04-17 06:06:23.153860 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.153867 | orchestrator | 2026-04-17 06:06:23.153875 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-17 06:06:23.153882 | orchestrator | Friday 17 April 2026 06:06:22 +0000 (0:00:00.140) 0:11:25.448 ********** 2026-04-17 06:06:23.153889 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.153896 | orchestrator | 2026-04-17 06:06:23.153904 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 06:06:23.153911 | orchestrator | Friday 17 April 2026 06:06:22 +0000 (0:00:00.167) 0:11:25.616 ********** 2026-04-17 06:06:23.153918 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 06:06:23.153925 | orchestrator | 2026-04-17 06:06:23.153932 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 06:06:23.153939 | orchestrator | Friday 17 April 2026 06:06:22 +0000 (0:00:00.126) 0:11:25.743 ********** 2026-04-17 06:06:23.153947 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.153954 | orchestrator | 2026-04-17 06:06:23.153961 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 06:06:23.153974 | orchestrator | Friday 17 April 2026 06:06:23 +0000 (0:00:00.152) 0:11:25.895 ********** 2026-04-17 06:06:23.664135 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.664262 | orchestrator | 2026-04-17 06:06:23.664289 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 06:06:23.664310 | orchestrator | Friday 17 April 2026 06:06:23 +0000 (0:00:00.157) 0:11:26.052 ********** 2026-04-17 06:06:23.664324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:06:23.664339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:06:23.664375 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:06:23.664405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 06:06:23.664419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:06:23.664439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:06:23.664459 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:06:23.664509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41525a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 06:06:23.664554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:06:23.664576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:06:23.664595 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:23.664614 | orchestrator | 2026-04-17 06:06:23.664634 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 06:06:23.664706 | orchestrator | Friday 17 April 2026 06:06:23 +0000 (0:00:00.272) 0:11:26.325 ********** 2026-04-17 06:06:23.664730 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:23.664757 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:23.664794 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:27.757343 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:27.757474 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:27.757489 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:27.757499 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:27.757528 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41525a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1', 'scsi-SQEMU_QEMU_HARDDISK_41525a0f-b2ac-45bd-994e-16d35250beaa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:27.757555 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:27.757565 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:06:27.757575 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:27.757586 | orchestrator | 2026-04-17 06:06:27.757596 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-17 06:06:27.757606 | orchestrator | Friday 17 April 2026 06:06:23 +0000 (0:00:00.236) 0:11:26.561 ********** 2026-04-17 06:06:27.757614 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:27.757624 | orchestrator | 2026-04-17 06:06:27.757633 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-17 06:06:27.757642 | orchestrator | Friday 17 April 2026 06:06:24 +0000 (0:00:00.500) 0:11:27.062 ********** 2026-04-17 06:06:27.757651 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:27.757659 | orchestrator | 2026-04-17 06:06:27.757668 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 06:06:27.757727 | orchestrator | Friday 17 April 2026 06:06:24 +0000 (0:00:00.146) 0:11:27.208 ********** 2026-04-17 06:06:27.757737 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:27.757745 | orchestrator | 2026-04-17 06:06:27.757754 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:06:27.757763 | orchestrator | Friday 17 April 2026 06:06:25 +0000 (0:00:01.496) 0:11:28.705 ********** 2026-04-17 06:06:27.757772 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:27.757780 | orchestrator | 2026-04-17 06:06:27.757789 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 06:06:27.757798 | orchestrator | Friday 17 April 2026 06:06:26 
+0000 (0:00:00.131) 0:11:28.837 ********** 2026-04-17 06:06:27.757834 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:27.757843 | orchestrator | 2026-04-17 06:06:27.757852 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:06:27.757861 | orchestrator | Friday 17 April 2026 06:06:26 +0000 (0:00:00.250) 0:11:29.087 ********** 2026-04-17 06:06:27.757871 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:27.757881 | orchestrator | 2026-04-17 06:06:27.757890 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-17 06:06:27.757907 | orchestrator | Friday 17 April 2026 06:06:26 +0000 (0:00:00.512) 0:11:29.600 ********** 2026-04-17 06:06:27.757917 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-17 06:06:27.757927 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-17 06:06:27.757937 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-17 06:06:27.757947 | orchestrator | 2026-04-17 06:06:27.757957 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-17 06:06:27.757967 | orchestrator | Friday 17 April 2026 06:06:27 +0000 (0:00:00.734) 0:11:30.334 ********** 2026-04-17 06:06:27.757977 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-17 06:06:27.757988 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-17 06:06:27.757997 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-17 06:06:27.758008 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:27.758097 | orchestrator | 2026-04-17 06:06:27.758118 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-17 06:06:38.284550 | orchestrator | Friday 17 April 2026 06:06:27 +0000 (0:00:00.167) 0:11:30.502 ********** 2026-04-17 06:06:38.284703 | 
orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.284853 | orchestrator | 2026-04-17 06:06:38.284874 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-17 06:06:38.284894 | orchestrator | Friday 17 April 2026 06:06:27 +0000 (0:00:00.136) 0:11:30.639 ********** 2026-04-17 06:06:38.284913 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:06:38.284933 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-17 06:06:38.284952 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:06:38.284971 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:06:38.284989 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 06:06:38.285007 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 06:06:38.285025 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:06:38.285062 | orchestrator | 2026-04-17 06:06:38.285085 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-17 06:06:38.285106 | orchestrator | Friday 17 April 2026 06:06:28 +0000 (0:00:00.877) 0:11:31.516 ********** 2026-04-17 06:06:38.285126 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:06:38.285164 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-17 06:06:38.285179 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:06:38.285210 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:06:38.285222 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] 
=> (item=testbed-node-4) 2026-04-17 06:06:38.285236 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 06:06:38.285250 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:06:38.285262 | orchestrator | 2026-04-17 06:06:38.285275 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 06:06:38.285287 | orchestrator | Friday 17 April 2026 06:06:30 +0000 (0:00:01.743) 0:11:33.260 ********** 2026-04-17 06:06:38.285300 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-04-17 06:06:38.285314 | orchestrator | 2026-04-17 06:06:38.285326 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 06:06:38.285339 | orchestrator | Friday 17 April 2026 06:06:30 +0000 (0:00:00.218) 0:11:33.478 ********** 2026-04-17 06:06:38.285351 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-04-17 06:06:38.285386 | orchestrator | 2026-04-17 06:06:38.285400 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 06:06:38.285413 | orchestrator | Friday 17 April 2026 06:06:30 +0000 (0:00:00.237) 0:11:33.716 ********** 2026-04-17 06:06:38.285424 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:38.285436 | orchestrator | 2026-04-17 06:06:38.285447 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 06:06:38.285472 | orchestrator | Friday 17 April 2026 06:06:31 +0000 (0:00:00.521) 0:11:34.237 ********** 2026-04-17 06:06:38.285497 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.285508 | orchestrator | 2026-04-17 06:06:38.285519 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-04-17 06:06:38.285530 | orchestrator | Friday 17 April 2026 06:06:31 +0000 (0:00:00.140) 0:11:34.378 ********** 2026-04-17 06:06:38.285540 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.285551 | orchestrator | 2026-04-17 06:06:38.285562 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 06:06:38.285585 | orchestrator | Friday 17 April 2026 06:06:32 +0000 (0:00:00.503) 0:11:34.881 ********** 2026-04-17 06:06:38.285596 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.285607 | orchestrator | 2026-04-17 06:06:38.285617 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-17 06:06:38.285628 | orchestrator | Friday 17 April 2026 06:06:32 +0000 (0:00:00.143) 0:11:35.025 ********** 2026-04-17 06:06:38.285639 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:38.285650 | orchestrator | 2026-04-17 06:06:38.285660 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 06:06:38.285671 | orchestrator | Friday 17 April 2026 06:06:32 +0000 (0:00:00.545) 0:11:35.570 ********** 2026-04-17 06:06:38.285682 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.285693 | orchestrator | 2026-04-17 06:06:38.285704 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 06:06:38.285715 | orchestrator | Friday 17 April 2026 06:06:32 +0000 (0:00:00.142) 0:11:35.713 ********** 2026-04-17 06:06:38.285748 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.285759 | orchestrator | 2026-04-17 06:06:38.285770 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 06:06:38.285781 | orchestrator | Friday 17 April 2026 06:06:33 +0000 (0:00:00.144) 0:11:35.858 ********** 2026-04-17 06:06:38.285792 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:38.285803 | 
orchestrator | 2026-04-17 06:06:38.285826 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 06:06:38.285837 | orchestrator | Friday 17 April 2026 06:06:33 +0000 (0:00:00.546) 0:11:36.404 ********** 2026-04-17 06:06:38.285848 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:38.285859 | orchestrator | 2026-04-17 06:06:38.285869 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 06:06:38.285901 | orchestrator | Friday 17 April 2026 06:06:34 +0000 (0:00:00.586) 0:11:36.991 ********** 2026-04-17 06:06:38.285913 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.285923 | orchestrator | 2026-04-17 06:06:38.285934 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 06:06:38.285945 | orchestrator | Friday 17 April 2026 06:06:34 +0000 (0:00:00.146) 0:11:37.138 ********** 2026-04-17 06:06:38.285956 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:38.285966 | orchestrator | 2026-04-17 06:06:38.285977 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 06:06:38.285988 | orchestrator | Friday 17 April 2026 06:06:34 +0000 (0:00:00.163) 0:11:37.301 ********** 2026-04-17 06:06:38.286010 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286084 | orchestrator | 2026-04-17 06:06:38.286096 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 06:06:38.286107 | orchestrator | Friday 17 April 2026 06:06:34 +0000 (0:00:00.141) 0:11:37.443 ********** 2026-04-17 06:06:38.286128 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286139 | orchestrator | 2026-04-17 06:06:38.286150 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 06:06:38.286160 | orchestrator | Friday 17 April 2026 06:06:34 +0000 
(0:00:00.141) 0:11:37.585 ********** 2026-04-17 06:06:38.286171 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286182 | orchestrator | 2026-04-17 06:06:38.286193 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 06:06:38.286204 | orchestrator | Friday 17 April 2026 06:06:34 +0000 (0:00:00.132) 0:11:37.717 ********** 2026-04-17 06:06:38.286214 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286225 | orchestrator | 2026-04-17 06:06:38.286236 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 06:06:38.286247 | orchestrator | Friday 17 April 2026 06:06:35 +0000 (0:00:00.131) 0:11:37.848 ********** 2026-04-17 06:06:38.286261 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286279 | orchestrator | 2026-04-17 06:06:38.286311 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 06:06:38.286338 | orchestrator | Friday 17 April 2026 06:06:35 +0000 (0:00:00.532) 0:11:38.380 ********** 2026-04-17 06:06:38.286356 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:38.286374 | orchestrator | 2026-04-17 06:06:38.286389 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 06:06:38.286407 | orchestrator | Friday 17 April 2026 06:06:35 +0000 (0:00:00.155) 0:11:38.536 ********** 2026-04-17 06:06:38.286427 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:38.286445 | orchestrator | 2026-04-17 06:06:38.286464 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 06:06:38.286484 | orchestrator | Friday 17 April 2026 06:06:35 +0000 (0:00:00.175) 0:11:38.711 ********** 2026-04-17 06:06:38.286501 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:38.286520 | orchestrator | 2026-04-17 06:06:38.286537 | orchestrator | TASK [ceph-common : Include 
configure_repository.yml] ************************** 2026-04-17 06:06:38.286556 | orchestrator | Friday 17 April 2026 06:06:36 +0000 (0:00:00.273) 0:11:38.985 ********** 2026-04-17 06:06:38.286575 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286593 | orchestrator | 2026-04-17 06:06:38.286612 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-17 06:06:38.286624 | orchestrator | Friday 17 April 2026 06:06:36 +0000 (0:00:00.178) 0:11:39.163 ********** 2026-04-17 06:06:38.286634 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286645 | orchestrator | 2026-04-17 06:06:38.286656 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-17 06:06:38.286666 | orchestrator | Friday 17 April 2026 06:06:36 +0000 (0:00:00.146) 0:11:39.310 ********** 2026-04-17 06:06:38.286677 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286688 | orchestrator | 2026-04-17 06:06:38.286699 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-17 06:06:38.286709 | orchestrator | Friday 17 April 2026 06:06:36 +0000 (0:00:00.158) 0:11:39.468 ********** 2026-04-17 06:06:38.286743 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286754 | orchestrator | 2026-04-17 06:06:38.286765 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-17 06:06:38.286776 | orchestrator | Friday 17 April 2026 06:06:36 +0000 (0:00:00.141) 0:11:39.610 ********** 2026-04-17 06:06:38.286787 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286797 | orchestrator | 2026-04-17 06:06:38.286808 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-17 06:06:38.286818 | orchestrator | Friday 17 April 2026 06:06:37 +0000 (0:00:00.145) 0:11:39.756 ********** 2026-04-17 06:06:38.286829 | orchestrator | 
skipping: [testbed-node-1] 2026-04-17 06:06:38.286840 | orchestrator | 2026-04-17 06:06:38.286851 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-17 06:06:38.286861 | orchestrator | Friday 17 April 2026 06:06:37 +0000 (0:00:00.131) 0:11:39.887 ********** 2026-04-17 06:06:38.286884 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286895 | orchestrator | 2026-04-17 06:06:38.286906 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-17 06:06:38.286917 | orchestrator | Friday 17 April 2026 06:06:37 +0000 (0:00:00.138) 0:11:40.026 ********** 2026-04-17 06:06:38.286928 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286938 | orchestrator | 2026-04-17 06:06:38.286949 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-17 06:06:38.286960 | orchestrator | Friday 17 April 2026 06:06:37 +0000 (0:00:00.149) 0:11:40.175 ********** 2026-04-17 06:06:38.286971 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.286981 | orchestrator | 2026-04-17 06:06:38.286992 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-17 06:06:38.287002 | orchestrator | Friday 17 April 2026 06:06:37 +0000 (0:00:00.121) 0:11:40.296 ********** 2026-04-17 06:06:38.287013 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.287024 | orchestrator | 2026-04-17 06:06:38.287034 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-17 06:06:38.287045 | orchestrator | Friday 17 April 2026 06:06:38 +0000 (0:00:00.571) 0:11:40.868 ********** 2026-04-17 06:06:38.287056 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:38.287067 | orchestrator | 2026-04-17 06:06:38.287091 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 
2026-04-17 06:06:55.671116 | orchestrator | Friday 17 April 2026 06:06:38 +0000 (0:00:00.154) 0:11:41.022 ********** 2026-04-17 06:06:55.671267 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:55.671297 | orchestrator | 2026-04-17 06:06:55.671317 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-17 06:06:55.671336 | orchestrator | Friday 17 April 2026 06:06:38 +0000 (0:00:00.226) 0:11:41.249 ********** 2026-04-17 06:06:55.671356 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:55.671375 | orchestrator | 2026-04-17 06:06:55.671387 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-17 06:06:55.671398 | orchestrator | Friday 17 April 2026 06:06:39 +0000 (0:00:00.929) 0:11:42.179 ********** 2026-04-17 06:06:55.671409 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:55.671420 | orchestrator | 2026-04-17 06:06:55.671431 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-17 06:06:55.671442 | orchestrator | Friday 17 April 2026 06:06:40 +0000 (0:00:01.453) 0:11:43.632 ********** 2026-04-17 06:06:55.671453 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-04-17 06:06:55.671465 | orchestrator | 2026-04-17 06:06:55.671476 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-17 06:06:55.671487 | orchestrator | Friday 17 April 2026 06:06:41 +0000 (0:00:00.224) 0:11:43.856 ********** 2026-04-17 06:06:55.671498 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:55.671509 | orchestrator | 2026-04-17 06:06:55.671520 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-17 06:06:55.671531 | orchestrator | Friday 17 April 2026 06:06:41 +0000 (0:00:00.165) 0:11:44.022 ********** 2026-04-17 06:06:55.671542 | orchestrator | 
skipping: [testbed-node-1] 2026-04-17 06:06:55.671553 | orchestrator | 2026-04-17 06:06:55.671580 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-17 06:06:55.671592 | orchestrator | Friday 17 April 2026 06:06:41 +0000 (0:00:00.152) 0:11:44.174 ********** 2026-04-17 06:06:55.671603 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 06:06:55.671614 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 06:06:55.671626 | orchestrator | 2026-04-17 06:06:55.671637 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-17 06:06:55.671650 | orchestrator | Friday 17 April 2026 06:06:42 +0000 (0:00:00.912) 0:11:45.087 ********** 2026-04-17 06:06:55.671662 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:06:55.671701 | orchestrator | 2026-04-17 06:06:55.671714 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-17 06:06:55.671727 | orchestrator | Friday 17 April 2026 06:06:42 +0000 (0:00:00.451) 0:11:45.539 ********** 2026-04-17 06:06:55.671740 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:55.671752 | orchestrator | 2026-04-17 06:06:55.671765 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-17 06:06:55.671778 | orchestrator | Friday 17 April 2026 06:06:42 +0000 (0:00:00.157) 0:11:45.696 ********** 2026-04-17 06:06:55.671791 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:06:55.671852 | orchestrator | 2026-04-17 06:06:55.671864 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-17 06:06:55.671877 | orchestrator | Friday 17 April 2026 06:06:43 +0000 (0:00:00.503) 0:11:46.200 ********** 2026-04-17 06:06:55.671889 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
06:06:55.671902 | orchestrator | 
2026-04-17 06:06:55.671915 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-17 06:06:55.671928 | orchestrator | Friday 17 April 2026 06:06:43 +0000 (0:00:00.152) 0:11:46.352 **********
2026-04-17 06:06:55.671941 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-04-17 06:06:55.671953 | orchestrator | 
2026-04-17 06:06:55.671966 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-17 06:06:55.671980 | orchestrator | Friday 17 April 2026 06:06:43 +0000 (0:00:00.225) 0:11:46.577 **********
2026-04-17 06:06:55.671993 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:06:55.672004 | orchestrator | 
2026-04-17 06:06:55.672015 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-17 06:06:55.672026 | orchestrator | Friday 17 April 2026 06:06:44 +0000 (0:00:00.704) 0:11:47.282 **********
2026-04-17 06:06:55.672037 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-04-17 06:06:55.672048 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-04-17 06:06:55.672058 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4) 
2026-04-17 06:06:55.672069 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672080 | orchestrator | 
2026-04-17 06:06:55.672090 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-17 06:06:55.672101 | orchestrator | Friday 17 April 2026 06:06:44 +0000 (0:00:00.154) 0:11:47.436 **********
2026-04-17 06:06:55.672112 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672123 | orchestrator | 
2026-04-17 06:06:55.672134 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-17 06:06:55.672145 | orchestrator | Friday 17 April 2026 06:06:44 +0000 (0:00:00.145) 0:11:47.581 **********
2026-04-17 06:06:55.672155 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672166 | orchestrator | 
2026-04-17 06:06:55.672177 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-17 06:06:55.672188 | orchestrator | Friday 17 April 2026 06:06:45 +0000 (0:00:00.177) 0:11:47.759 **********
2026-04-17 06:06:55.672199 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672209 | orchestrator | 
2026-04-17 06:06:55.672220 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-17 06:06:55.672231 | orchestrator | Friday 17 April 2026 06:06:45 +0000 (0:00:00.164) 0:11:47.924 **********
2026-04-17 06:06:55.672242 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672252 | orchestrator | 
2026-04-17 06:06:55.672283 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-17 06:06:55.672295 | orchestrator | Friday 17 April 2026 06:06:45 +0000 (0:00:00.162) 0:11:48.086 **********
2026-04-17 06:06:55.672305 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672316 | orchestrator | 
2026-04-17 06:06:55.672327 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-17 06:06:55.672338 | orchestrator | Friday 17 April 2026 06:06:45 +0000 (0:00:00.171) 0:11:48.257 **********
2026-04-17 06:06:55.672357 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:06:55.672368 | orchestrator | 
2026-04-17 06:06:55.672378 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-17 06:06:55.672389 | orchestrator | Friday 17 April 2026 06:06:47 +0000 (0:00:01.555) 0:11:49.813 **********
2026-04-17 06:06:55.672400 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:06:55.672410 | orchestrator | 
2026-04-17 06:06:55.672421 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-17 06:06:55.672431 | orchestrator | Friday 17 April 2026 06:06:47 +0000 (0:00:00.150) 0:11:49.963 **********
2026-04-17 06:06:55.672442 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-04-17 06:06:55.672453 | orchestrator | 
2026-04-17 06:06:55.672463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-17 06:06:55.672474 | orchestrator | Friday 17 April 2026 06:06:47 +0000 (0:00:00.569) 0:11:50.532 **********
2026-04-17 06:06:55.672484 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672495 | orchestrator | 
2026-04-17 06:06:55.672505 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-17 06:06:55.672516 | orchestrator | Friday 17 April 2026 06:06:47 +0000 (0:00:00.145) 0:11:50.678 **********
2026-04-17 06:06:55.672526 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672537 | orchestrator | 
2026-04-17 06:06:55.672554 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-17 06:06:55.672565 | orchestrator | Friday 17 April 2026 06:06:48 +0000 (0:00:00.170) 0:11:50.849 **********
2026-04-17 06:06:55.672575 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672586 | orchestrator | 
2026-04-17 06:06:55.672597 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-17 06:06:55.672607 | orchestrator | Friday 17 April 2026 06:06:48 +0000 (0:00:00.155) 0:11:51.004 **********
2026-04-17 06:06:55.672618 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672629 | orchestrator | 
2026-04-17 06:06:55.672640 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-17 06:06:55.672650 | orchestrator | Friday 17 April 2026 06:06:48 +0000 (0:00:00.153) 0:11:51.157 **********
2026-04-17 06:06:55.672661 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672672 | orchestrator | 
2026-04-17 06:06:55.672682 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-17 06:06:55.672693 | orchestrator | Friday 17 April 2026 06:06:48 +0000 (0:00:00.175) 0:11:51.333 **********
2026-04-17 06:06:55.672704 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672714 | orchestrator | 
2026-04-17 06:06:55.672725 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-17 06:06:55.672735 | orchestrator | Friday 17 April 2026 06:06:48 +0000 (0:00:00.174) 0:11:51.507 **********
2026-04-17 06:06:55.672746 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672757 | orchestrator | 
2026-04-17 06:06:55.672767 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-17 06:06:55.672778 | orchestrator | Friday 17 April 2026 06:06:48 +0000 (0:00:00.164) 0:11:51.672 **********
2026-04-17 06:06:55.672788 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:06:55.672815 | orchestrator | 
2026-04-17 06:06:55.672826 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-17 06:06:55.672837 | orchestrator | Friday 17 April 2026 06:06:49 +0000 (0:00:00.170) 0:11:51.842 **********
2026-04-17 06:06:55.672847 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:06:55.672858 | orchestrator | 
2026-04-17 06:06:55.672869 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-17 06:06:55.672889 | orchestrator | Friday 17 April 2026 06:06:49 +0000 (0:00:00.244) 0:11:52.087 **********
2026-04-17 06:06:55.672908 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-04-17 06:06:55.672926 | orchestrator | 
2026-04-17 06:06:55.672945 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-17 06:06:55.672974 | orchestrator | Friday 17 April 2026 06:06:49 +0000 (0:00:00.535) 0:11:52.622 **********
2026-04-17 06:06:55.672994 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-04-17 06:06:55.673013 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-17 06:06:55.673034 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-17 06:06:55.673052 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-17 06:06:55.673071 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-17 06:06:55.673089 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-17 06:06:55.673106 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-17 06:06:55.673122 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-17 06:06:55.673140 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-17 06:06:55.673160 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-17 06:06:55.673178 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-17 06:06:55.673196 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-17 06:06:55.673213 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-17 06:06:55.673230 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-17 06:06:55.673247 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-04-17 06:06:55.673265 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-04-17 06:06:55.673282 | orchestrator | 
2026-04-17 06:06:55.673313 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-17 06:07:14.383904 | orchestrator | Friday 17 April 2026 06:06:55 +0000 (0:00:05.788) 0:11:58.411 **********
2026-04-17 06:07:14.384025 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384042 | orchestrator | 
2026-04-17 06:07:14.384055 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-17 06:07:14.384066 | orchestrator | Friday 17 April 2026 06:06:55 +0000 (0:00:00.143) 0:11:58.554 **********
2026-04-17 06:07:14.384077 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384088 | orchestrator | 
2026-04-17 06:07:14.384099 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-17 06:07:14.384110 | orchestrator | Friday 17 April 2026 06:06:55 +0000 (0:00:00.168) 0:11:58.723 **********
2026-04-17 06:07:14.384121 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384132 | orchestrator | 
2026-04-17 06:07:14.384142 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-17 06:07:14.384153 | orchestrator | Friday 17 April 2026 06:06:56 +0000 (0:00:00.136) 0:11:58.860 **********
2026-04-17 06:07:14.384163 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384174 | orchestrator | 
2026-04-17 06:07:14.384185 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-17 06:07:14.384195 | orchestrator | Friday 17 April 2026 06:06:56 +0000 (0:00:00.145) 0:11:59.006 **********
2026-04-17 06:07:14.384206 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384217 | orchestrator | 
2026-04-17 06:07:14.384227 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-17 06:07:14.384238 | orchestrator | Friday 17 April 2026 06:06:56 +0000 (0:00:00.135) 0:11:59.141 **********
2026-04-17 06:07:14.384249 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384259 | orchestrator | 
2026-04-17 06:07:14.384286 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-17 06:07:14.384298 | orchestrator | Friday 17 April 2026 06:06:56 +0000 (0:00:00.127) 0:11:59.269 **********
2026-04-17 06:07:14.384309 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384320 | orchestrator | 
2026-04-17 06:07:14.384330 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-17 06:07:14.384362 | orchestrator | Friday 17 April 2026 06:06:56 +0000 (0:00:00.147) 0:11:59.416 **********
2026-04-17 06:07:14.384374 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384384 | orchestrator | 
2026-04-17 06:07:14.384395 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-17 06:07:14.384406 | orchestrator | Friday 17 April 2026 06:06:56 +0000 (0:00:00.135) 0:11:59.552 **********
2026-04-17 06:07:14.384417 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384427 | orchestrator | 
2026-04-17 06:07:14.384438 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-17 06:07:14.384449 | orchestrator | Friday 17 April 2026 06:06:56 +0000 (0:00:00.147) 0:11:59.699 **********
2026-04-17 06:07:14.384460 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384470 | orchestrator | 
2026-04-17 06:07:14.384481 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-17 06:07:14.384492 | orchestrator | Friday 17 April 2026 06:06:57 +0000 (0:00:00.126) 0:11:59.825 **********
2026-04-17 06:07:14.384502 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384513 | orchestrator | 
2026-04-17 06:07:14.384524 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-17 06:07:14.384535 | orchestrator | Friday 17 April 2026 06:06:57 +0000 (0:00:00.144) 0:11:59.970 **********
2026-04-17 06:07:14.384545 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384556 | orchestrator | 
2026-04-17 06:07:14.384567 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-17 06:07:14.384577 | orchestrator | Friday 17 April 2026 06:06:57 +0000 (0:00:00.128) 0:12:00.099 **********
2026-04-17 06:07:14.384588 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384599 | orchestrator | 
2026-04-17 06:07:14.384609 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-17 06:07:14.384620 | orchestrator | Friday 17 April 2026 06:06:58 +0000 (0:00:01.063) 0:12:01.162 **********
2026-04-17 06:07:14.384631 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384641 | orchestrator | 
2026-04-17 06:07:14.384652 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-17 06:07:14.384663 | orchestrator | Friday 17 April 2026 06:06:58 +0000 (0:00:00.143) 0:12:01.305 **********
2026-04-17 06:07:14.384673 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384684 | orchestrator | 
2026-04-17 06:07:14.384695 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-17 06:07:14.384705 | orchestrator | Friday 17 April 2026 06:06:58 +0000 (0:00:00.259) 0:12:01.565 **********
2026-04-17 06:07:14.384716 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384727 | orchestrator | 
2026-04-17 06:07:14.384737 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-17 06:07:14.384748 | orchestrator | Friday 17 April 2026 06:06:58 +0000 (0:00:00.139) 0:12:01.704 **********
2026-04-17 06:07:14.384758 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384769 | orchestrator | 
2026-04-17 06:07:14.384781 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:07:14.384793 | orchestrator | Friday 17 April 2026 06:06:59 +0000 (0:00:00.133) 0:12:01.837 **********
2026-04-17 06:07:14.384804 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384814 | orchestrator | 
2026-04-17 06:07:14.384825 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:07:14.384836 | orchestrator | Friday 17 April 2026 06:06:59 +0000 (0:00:00.139) 0:12:01.977 **********
2026-04-17 06:07:14.384847 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384857 | orchestrator | 
2026-04-17 06:07:14.384868 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:07:14.384896 | orchestrator | Friday 17 April 2026 06:06:59 +0000 (0:00:00.150) 0:12:02.128 **********
2026-04-17 06:07:14.384911 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.384929 | orchestrator | 
2026-04-17 06:07:14.384980 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:07:14.385000 | orchestrator | Friday 17 April 2026 06:06:59 +0000 (0:00:00.153) 0:12:02.281 **********
2026-04-17 06:07:14.385021 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.385040 | orchestrator | 
2026-04-17 06:07:14.385051 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:07:14.385062 | orchestrator | Friday 17 April 2026 06:06:59 +0000 (0:00:00.150) 0:12:02.431 **********
2026-04-17 06:07:14.385079 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-04-17 06:07:14.385099 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-04-17 06:07:14.385115 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-04-17 06:07:14.385133 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.385151 | orchestrator | 
2026-04-17 06:07:14.385170 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 06:07:14.385189 | orchestrator | Friday 17 April 2026 06:07:00 +0000 (0:00:00.423) 0:12:02.855 **********
2026-04-17 06:07:14.385201 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-04-17 06:07:14.385211 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-04-17 06:07:14.385222 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-04-17 06:07:14.385232 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.385243 | orchestrator | 
2026-04-17 06:07:14.385254 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 06:07:14.385265 | orchestrator | Friday 17 April 2026 06:07:00 +0000 (0:00:00.440) 0:12:03.296 **********
2026-04-17 06:07:14.385275 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-04-17 06:07:14.385293 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-04-17 06:07:14.385305 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-04-17 06:07:14.385315 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.385326 | orchestrator | 
2026-04-17 06:07:14.385337 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 06:07:14.385347 | orchestrator | Friday 17 April 2026 06:07:01 +0000 (0:00:00.858) 0:12:04.155 **********
2026-04-17 06:07:14.385358 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.385368 | orchestrator | 
2026-04-17 06:07:14.385379 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 06:07:14.385389 | orchestrator | Friday 17 April 2026 06:07:01 +0000 (0:00:00.140) 0:12:04.295 **********
2026-04-17 06:07:14.385401 | orchestrator | skipping: [testbed-node-1] => (item=0) 
2026-04-17 06:07:14.385412 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.385422 | orchestrator | 
2026-04-17 06:07:14.385433 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-17 06:07:14.385444 | orchestrator | Friday 17 April 2026 06:07:02 +0000 (0:00:01.177) 0:12:05.472 **********
2026-04-17 06:07:14.385454 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:07:14.385465 | orchestrator | 
2026-04-17 06:07:14.385475 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-17 06:07:14.385486 | orchestrator | Friday 17 April 2026 06:07:03 +0000 (0:00:00.926) 0:12:06.399 **********
2026-04-17 06:07:14.385497 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:07:14.385508 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 06:07:14.385519 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:07:14.385530 | orchestrator | 
2026-04-17 06:07:14.385540 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-17 06:07:14.385551 | orchestrator | Friday 17 April 2026 06:07:04 +0000 (0:00:00.728) 0:12:07.127 **********
2026-04-17 06:07:14.385562 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-04-17 06:07:14.385572 | orchestrator | 
2026-04-17 06:07:14.385583 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-17 06:07:14.385603 | orchestrator | Friday 17 April 2026 06:07:04 +0000 (0:00:00.209) 0:12:07.336 **********
2026-04-17 06:07:14.385614 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:07:14.385624 | orchestrator | 
2026-04-17 06:07:14.385635 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-17 06:07:14.385645 | orchestrator | Friday 17 April 2026 06:07:05 +0000 (0:00:00.496) 0:12:07.832 **********
2026-04-17 06:07:14.385656 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:14.385667 | orchestrator | 
2026-04-17 06:07:14.385677 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-17 06:07:14.385688 | orchestrator | Friday 17 April 2026 06:07:05 +0000 (0:00:00.157) 0:12:07.990 **********
2026-04-17 06:07:14.385699 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:07:14.385709 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:07:14.385720 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:07:14.385730 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-04-17 06:07:14.385741 | orchestrator | 
2026-04-17 06:07:14.385751 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-17 06:07:14.385762 | orchestrator | Friday 17 April 2026 06:07:11 +0000 (0:00:06.645) 0:12:14.636 **********
2026-04-17 06:07:14.385773 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:07:14.385783 | orchestrator | 
2026-04-17 06:07:14.385794 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-17 06:07:14.385805 | orchestrator | Friday 17 April 2026 06:07:12 +0000 (0:00:00.213) 0:12:14.849 **********
2026-04-17 06:07:14.385815 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-04-17 06:07:14.385826 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-17 06:07:14.385837 | orchestrator | 
2026-04-17 06:07:14.385855 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-17 06:07:36.159540 | orchestrator | Friday 17 April 2026 06:07:14 +0000 (0:00:02.275) 0:12:17.125 **********
2026-04-17 06:07:36.159657 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-04-17 06:07:36.159675 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-17 06:07:36.159687 | orchestrator | 
2026-04-17 06:07:36.159699 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-17 06:07:36.159710 | orchestrator | Friday 17 April 2026 06:07:15 +0000 (0:00:01.063) 0:12:18.188 **********
2026-04-17 06:07:36.159721 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:07:36.159732 | orchestrator | 
2026-04-17 06:07:36.159743 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-17 06:07:36.159754 | orchestrator | Friday 17 April 2026 06:07:16 +0000 (0:00:00.857) 0:12:19.046 **********
2026-04-17 06:07:36.159765 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:36.159776 | orchestrator | 
2026-04-17 06:07:36.159786 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-17 06:07:36.159797 | orchestrator | Friday 17 April 2026 06:07:16 +0000 (0:00:00.140) 0:12:19.186 **********
2026-04-17 06:07:36.159808 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:36.159819 | orchestrator | 
2026-04-17 06:07:36.159829 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-17 06:07:36.159840 | orchestrator | Friday 17 April 2026 06:07:16 +0000 (0:00:00.146) 0:12:19.333 **********
2026-04-17 06:07:36.159851 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-04-17 06:07:36.159862 | orchestrator | 
2026-04-17 06:07:36.159873 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-17 06:07:36.159884 | orchestrator | Friday 17 April 2026 06:07:16 +0000 (0:00:00.192) 0:12:19.526 **********
2026-04-17 06:07:36.159923 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:36.159935 | orchestrator | 
2026-04-17 06:07:36.159946 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-17 06:07:36.160026 | orchestrator | Friday 17 April 2026 06:07:16 +0000 (0:00:00.173) 0:12:19.700 **********
2026-04-17 06:07:36.160038 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:36.160049 | orchestrator | 
2026-04-17 06:07:36.160060 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-17 06:07:36.160073 | orchestrator | Friday 17 April 2026 06:07:17 +0000 (0:00:00.176) 0:12:19.876 **********
2026-04-17 06:07:36.160086 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-04-17 06:07:36.160099 | orchestrator | 
2026-04-17 06:07:36.160111 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-17 06:07:36.160123 | orchestrator | Friday 17 April 2026 06:07:17 +0000 (0:00:00.219) 0:12:20.095 **********
2026-04-17 06:07:36.160136 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:07:36.160149 | orchestrator | 
2026-04-17 06:07:36.160162 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-17 06:07:36.160175 | orchestrator | Friday 17 April 2026 06:07:18 +0000 (0:00:01.137) 0:12:21.233 **********
2026-04-17 06:07:36.160187 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:07:36.160199 | orchestrator | 
2026-04-17 06:07:36.160212 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-17 06:07:36.160224 | orchestrator | Friday 17 April 2026 06:07:19 +0000 (0:00:00.955) 0:12:22.189 **********
2026-04-17 06:07:36.160236 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:07:36.160249 | orchestrator | 
2026-04-17 06:07:36.160262 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-17 06:07:36.160275 | orchestrator | Friday 17 April 2026 06:07:20 +0000 (0:00:01.403) 0:12:23.592 **********
2026-04-17 06:07:36.160288 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:07:36.160301 | orchestrator | 
2026-04-17 06:07:36.160313 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-17 06:07:36.160327 | orchestrator | Friday 17 April 2026 06:07:24 +0000 (0:00:03.940) 0:12:27.532 **********
2026-04-17 06:07:36.160339 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:07:36.160352 | orchestrator | 
2026-04-17 06:07:36.160364 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-17 06:07:36.160377 | orchestrator | 
2026-04-17 06:07:36.160390 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-17 06:07:36.160403 | orchestrator | Friday 17 April 2026 06:07:25 +0000 (0:00:01.069) 0:12:28.602 **********
2026-04-17 06:07:36.160415 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:07:36.160428 | orchestrator | 
2026-04-17 06:07:36.160442 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-17 06:07:36.160453 | orchestrator | Friday 17 April 2026 06:07:27 +0000 (0:00:01.935) 0:12:30.538 **********
2026-04-17 06:07:36.160464 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:07:36.160475 | orchestrator | 
2026-04-17 06:07:36.160485 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-17 06:07:36.160497 | orchestrator | Friday 17 April 2026 06:07:29 +0000 (0:00:01.596) 0:12:32.135 **********
2026-04-17 06:07:36.160507 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-04-17 06:07:36.160518 | orchestrator | 
2026-04-17 06:07:36.160529 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-17 06:07:36.160540 | orchestrator | Friday 17 April 2026 06:07:29 +0000 (0:00:00.274) 0:12:32.410 **********
2026-04-17 06:07:36.160551 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:36.160561 | orchestrator | 
2026-04-17 06:07:36.160572 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-17 06:07:36.160583 | orchestrator | Friday 17 April 2026 06:07:30 +0000 (0:00:00.468) 0:12:32.879 **********
2026-04-17 06:07:36.160594 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:36.160605 | orchestrator | 
2026-04-17 06:07:36.160616 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-17 06:07:36.160627 | orchestrator | Friday 17 April 2026 06:07:30 +0000 (0:00:00.165) 0:12:33.044 **********
2026-04-17 06:07:36.160644 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:36.160655 | orchestrator | 
2026-04-17 06:07:36.160665 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-17 06:07:36.160693 | orchestrator | Friday 17 April 2026 06:07:30 +0000 (0:00:00.531) 0:12:33.576 **********
2026-04-17 06:07:36.160704 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:36.160715 | orchestrator | 
2026-04-17 06:07:36.160726 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-17 06:07:36.160738 | orchestrator | Friday 17 April 2026 06:07:30 +0000 (0:00:00.151) 0:12:33.727 **********
2026-04-17 06:07:36.160749 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:36.160759 | orchestrator | 
2026-04-17 06:07:36.160770 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-17 06:07:36.160781 | orchestrator | Friday 17 April 2026 06:07:31 +0000 (0:00:00.151) 0:12:33.878 **********
2026-04-17 06:07:36.160792 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:36.160802 | orchestrator | 
2026-04-17 06:07:36.160813 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-17 06:07:36.160824 | orchestrator | Friday 17 April 2026 06:07:31 +0000 (0:00:00.170) 0:12:34.049 **********
2026-04-17 06:07:36.160835 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:07:36.160845 | orchestrator | 
2026-04-17 06:07:36.160856 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-17 06:07:36.160867 | orchestrator | Friday 17 April 2026 06:07:31 +0000 (0:00:00.506) 0:12:34.555 **********
2026-04-17 06:07:36.160878 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:36.160888 | orchestrator | 
2026-04-17 06:07:36.160899 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-17 06:07:36.160910 | orchestrator | Friday 17 April 2026 06:07:31 +0000 (0:00:00.156) 0:12:34.712 **********
2026-04-17 06:07:36.160920 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:07:36.160931 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:07:36.160947 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:07:36.160958 | orchestrator | 
2026-04-17 06:07:36.161000 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-17 06:07:36.161011 | orchestrator | Friday 17 April 2026 06:07:32 +0000 (0:00:00.751) 0:12:35.464 **********
2026-04-17 06:07:36.161022 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:36.161033 | orchestrator | 
2026-04-17 06:07:36.161044 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-17 06:07:36.161054 | orchestrator | Friday 17 April 2026 06:07:32 +0000 (0:00:00.251) 0:12:35.716 **********
2026-04-17 06:07:36.161065 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:07:36.161076 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:07:36.161087 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:07:36.161098 | orchestrator | 
2026-04-17 06:07:36.161108 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-17 06:07:36.161119 | orchestrator | Friday 17 April 2026 06:07:34 +0000 (0:00:01.954) 0:12:37.671 **********
2026-04-17 06:07:36.161130 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0) 
2026-04-17 06:07:36.161141 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1) 
2026-04-17 06:07:36.161152 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2) 
2026-04-17 06:07:36.161163 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:07:36.161174 | orchestrator | 
2026-04-17 06:07:36.161184 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-17 06:07:36.161195 | orchestrator | Friday 17 April 2026 06:07:35 +0000 (0:00:00.466) 0:12:38.138 **********
2026-04-17 06:07:36.161207 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 
2026-04-17 06:07:36.161229 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-04-17 06:07:36.161240 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 
2026-04-17 06:07:36.161252 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:07:36.161263 | orchestrator | 
2026-04-17 06:07:36.161274 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-17 06:07:36.161285 | orchestrator | Friday 17 April 2026 06:07:36 +0000 (0:00:00.687) 0:12:38.825 **********
2026-04-17 06:07:36.161297 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-04-17 06:07:36.161319 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-04-17 06:07:40.869305 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-04-17 06:07:40.869409 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:07:40.869425 | orchestrator | 
2026-04-17 06:07:40.869438 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-17 06:07:40.869450 | orchestrator | Friday 17 April 2026 06:07:36 +0000 (0:00:00.193) 0:12:39.019 **********
2026-04-17 06:07:40.869481 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:07:33.527809', 'end': '2026-04-17 06:07:33.575719', 'delta': '0:00:00.047910', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:07:40.869496 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:07:34.151360', 'end': '2026-04-17 06:07:34.190791', 'delta': '0:00:00.039431', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:07:40.869531 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:07:34.704579', 'end': '2026-04-17 06:07:34.753197', 'delta': '0:00:00.048618', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:07:40.869543 | orchestrator | 
2026-04-17 06:07:40.869555 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-17 06:07:40.869566 | orchestrator | Friday 17 April 2026 06:07:36 +0000 (0:00:00.261) 0:12:39.281 **********
2026-04-17 06:07:40.869577 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:40.869589 | orchestrator | 
2026-04-17 06:07:40.869600 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-17 06:07:40.869610 | orchestrator | Friday 17 April 2026 06:07:36 +0000 (0:00:00.276) 0:12:39.554 **********
2026-04-17 06:07:40.869628 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:07:40.869646 | orchestrator | 
2026-04-17 06:07:40.869665 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-17 06:07:40.869683 | orchestrator | Friday 17 April 2026 06:07:37 +0000 (0:00:00.276) 0:12:39.830 **********
2026-04-17 06:07:40.869701 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:40.869718 | orchestrator | 
2026-04-17 06:07:40.869735 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-17 06:07:40.869752 | 
orchestrator | Friday 17 April 2026 06:07:37 +0000 (0:00:00.162) 0:12:39.993 ********** 2026-04-17 06:07:40.869770 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:07:40.869789 | orchestrator | 2026-04-17 06:07:40.869807 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:07:40.869826 | orchestrator | Friday 17 April 2026 06:07:38 +0000 (0:00:01.472) 0:12:41.465 ********** 2026-04-17 06:07:40.869840 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:07:40.869852 | orchestrator | 2026-04-17 06:07:40.869864 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 06:07:40.869876 | orchestrator | Friday 17 April 2026 06:07:39 +0000 (0:00:00.576) 0:12:42.041 ********** 2026-04-17 06:07:40.869907 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:40.869921 | orchestrator | 2026-04-17 06:07:40.869933 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-17 06:07:40.869946 | orchestrator | Friday 17 April 2026 06:07:39 +0000 (0:00:00.134) 0:12:42.176 ********** 2026-04-17 06:07:40.869958 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:40.869970 | orchestrator | 2026-04-17 06:07:40.870011 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:07:40.870085 | orchestrator | Friday 17 April 2026 06:07:39 +0000 (0:00:00.252) 0:12:42.429 ********** 2026-04-17 06:07:40.870097 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:40.870110 | orchestrator | 2026-04-17 06:07:40.870122 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 06:07:40.870135 | orchestrator | Friday 17 April 2026 06:07:39 +0000 (0:00:00.131) 0:12:42.560 ********** 2026-04-17 06:07:40.870148 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:40.870195 | 
orchestrator | 2026-04-17 06:07:40.870208 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-17 06:07:40.870221 | orchestrator | Friday 17 April 2026 06:07:39 +0000 (0:00:00.145) 0:12:42.706 ********** 2026-04-17 06:07:40.870247 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:40.870257 | orchestrator | 2026-04-17 06:07:40.870268 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-17 06:07:40.870278 | orchestrator | Friday 17 April 2026 06:07:40 +0000 (0:00:00.163) 0:12:42.869 ********** 2026-04-17 06:07:40.870289 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:40.870300 | orchestrator | 2026-04-17 06:07:40.870311 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 06:07:40.870329 | orchestrator | Friday 17 April 2026 06:07:40 +0000 (0:00:00.155) 0:12:43.025 ********** 2026-04-17 06:07:40.870340 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:40.870350 | orchestrator | 2026-04-17 06:07:40.870361 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 06:07:40.870371 | orchestrator | Friday 17 April 2026 06:07:40 +0000 (0:00:00.135) 0:12:43.161 ********** 2026-04-17 06:07:40.870382 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:40.870392 | orchestrator | 2026-04-17 06:07:40.870403 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 06:07:40.870415 | orchestrator | Friday 17 April 2026 06:07:40 +0000 (0:00:00.161) 0:12:43.322 ********** 2026-04-17 06:07:40.870426 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:40.870436 | orchestrator | 2026-04-17 06:07:40.870447 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 06:07:40.870458 | orchestrator | Friday 17 April 2026 
06:07:40 +0000 (0:00:00.150) 0:12:43.473 ********** 2026-04-17 06:07:40.870469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:07:40.870482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:07:40.870493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:07:40.870505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-36-58-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE 
[Natoma/Triton II]', 'holders': []}})  2026-04-17 06:07:40.870518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:07:40.870539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:07:41.129590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:07:41.129704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60cf27b4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16', 
'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-17 06:07:41.129724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:07:41.129736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:07:41.129748 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:41.129760 | orchestrator | 2026-04-17 06:07:41.129773 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 06:07:41.129806 | orchestrator | Friday 17 April 2026 06:07:40 +0000 (0:00:00.271) 0:12:43.744 ********** 2026-04-17 06:07:41.129836 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:07:41.129856 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:07:41.129867 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:07:41.129879 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-36-58-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:07:41.129891 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:07:41.129902 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:07:41.129929 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:07:41.129958 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '60cf27b4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1', 'scsi-SQEMU_QEMU_HARDDISK_60cf27b4-7c66-4d7c-95df-912b136ea49d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:07:51.126375 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:07:51.126490 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:07:51.126534 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:51.126548 | orchestrator | 2026-04-17 06:07:51.126561 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-17 06:07:51.126572 | 
orchestrator | Friday 17 April 2026 06:07:41 +0000 (0:00:00.254) 0:12:43.999 ********** 2026-04-17 06:07:51.126583 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:07:51.126594 | orchestrator | 2026-04-17 06:07:51.126605 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-17 06:07:51.126616 | orchestrator | Friday 17 April 2026 06:07:41 +0000 (0:00:00.481) 0:12:44.481 ********** 2026-04-17 06:07:51.126627 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:07:51.126638 | orchestrator | 2026-04-17 06:07:51.126649 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 06:07:51.126659 | orchestrator | Friday 17 April 2026 06:07:42 +0000 (0:00:00.595) 0:12:45.077 ********** 2026-04-17 06:07:51.126670 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:07:51.126681 | orchestrator | 2026-04-17 06:07:51.126692 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:07:51.126702 | orchestrator | Friday 17 April 2026 06:07:42 +0000 (0:00:00.548) 0:12:45.625 ********** 2026-04-17 06:07:51.126713 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:51.126724 | orchestrator | 2026-04-17 06:07:51.126735 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 06:07:51.126745 | orchestrator | Friday 17 April 2026 06:07:43 +0000 (0:00:00.146) 0:12:45.772 ********** 2026-04-17 06:07:51.126756 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:51.126767 | orchestrator | 2026-04-17 06:07:51.126777 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:07:51.126788 | orchestrator | Friday 17 April 2026 06:07:43 +0000 (0:00:00.245) 0:12:46.017 ********** 2026-04-17 06:07:51.126799 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:51.126809 | orchestrator | 2026-04-17 06:07:51.126820 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-17 06:07:51.126845 | orchestrator | Friday 17 April 2026 06:07:43 +0000 (0:00:00.149) 0:12:46.166 ********** 2026-04-17 06:07:51.126857 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-17 06:07:51.126870 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-17 06:07:51.126881 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-17 06:07:51.126891 | orchestrator | 2026-04-17 06:07:51.126902 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-17 06:07:51.126915 | orchestrator | Friday 17 April 2026 06:07:44 +0000 (0:00:00.648) 0:12:46.815 ********** 2026-04-17 06:07:51.126929 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-17 06:07:51.126943 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-17 06:07:51.126955 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-17 06:07:51.126968 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:51.126980 | orchestrator | 2026-04-17 06:07:51.126992 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-17 06:07:51.127005 | orchestrator | Friday 17 April 2026 06:07:44 +0000 (0:00:00.151) 0:12:46.967 ********** 2026-04-17 06:07:51.127018 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:51.127065 | orchestrator | 2026-04-17 06:07:51.127077 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-17 06:07:51.127088 | orchestrator | Friday 17 April 2026 06:07:44 +0000 (0:00:00.136) 0:12:47.104 ********** 2026-04-17 06:07:51.127098 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:07:51.127110 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-04-17 06:07:51.127131 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-17 06:07:51.127142 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:07:51.127153 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 06:07:51.127165 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 06:07:51.127193 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:07:51.127205 | orchestrator | 2026-04-17 06:07:51.127215 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-17 06:07:51.127226 | orchestrator | Friday 17 April 2026 06:07:45 +0000 (0:00:01.001) 0:12:48.106 ********** 2026-04-17 06:07:51.127237 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:07:51.127247 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:07:51.127258 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-17 06:07:51.127269 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:07:51.127280 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 06:07:51.127290 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 06:07:51.127301 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:07:51.127312 | orchestrator | 2026-04-17 06:07:51.127323 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 06:07:51.127333 | orchestrator | Friday 17 April 2026 06:07:46 +0000 (0:00:01.562) 0:12:49.668 
********** 2026-04-17 06:07:51.127344 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-04-17 06:07:51.127356 | orchestrator | 2026-04-17 06:07:51.127366 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 06:07:51.127377 | orchestrator | Friday 17 April 2026 06:07:47 +0000 (0:00:00.187) 0:12:49.856 ********** 2026-04-17 06:07:51.127388 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-04-17 06:07:51.127399 | orchestrator | 2026-04-17 06:07:51.127410 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 06:07:51.127421 | orchestrator | Friday 17 April 2026 06:07:47 +0000 (0:00:00.421) 0:12:50.277 ********** 2026-04-17 06:07:51.127431 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:07:51.127442 | orchestrator | 2026-04-17 06:07:51.127453 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 06:07:51.127464 | orchestrator | Friday 17 April 2026 06:07:48 +0000 (0:00:00.522) 0:12:50.800 ********** 2026-04-17 06:07:51.127474 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:51.127485 | orchestrator | 2026-04-17 06:07:51.127496 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-17 06:07:51.127507 | orchestrator | Friday 17 April 2026 06:07:48 +0000 (0:00:00.129) 0:12:50.929 ********** 2026-04-17 06:07:51.127517 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:07:51.127528 | orchestrator | 2026-04-17 06:07:51.127539 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 06:07:51.127550 | orchestrator | Friday 17 April 2026 06:07:48 +0000 (0:00:00.132) 0:12:51.062 ********** 2026-04-17 06:07:51.127560 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
2026-04-17 06:07:51.127571 | orchestrator |
2026-04-17 06:07:51.127582 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 06:07:51.127592 | orchestrator | Friday 17 April 2026 06:07:48 +0000 (0:00:00.118) 0:12:51.180 **********
2026-04-17 06:07:51.127603 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:51.127613 | orchestrator |
2026-04-17 06:07:51.127624 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 06:07:51.127642 | orchestrator | Friday 17 April 2026 06:07:48 +0000 (0:00:00.466) 0:12:51.646 **********
2026-04-17 06:07:51.127653 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:07:51.127664 | orchestrator |
2026-04-17 06:07:51.127680 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 06:07:51.127691 | orchestrator | Friday 17 April 2026 06:07:49 +0000 (0:00:00.115) 0:12:51.762 **********
2026-04-17 06:07:51.127702 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:07:51.127712 | orchestrator |
2026-04-17 06:07:51.127723 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 06:07:51.127734 | orchestrator | Friday 17 April 2026 06:07:49 +0000 (0:00:00.117) 0:12:51.879 **********
2026-04-17 06:07:51.127744 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:51.127755 | orchestrator |
2026-04-17 06:07:51.127765 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 06:07:51.127776 | orchestrator | Friday 17 April 2026 06:07:49 +0000 (0:00:00.479) 0:12:52.358 **********
2026-04-17 06:07:51.127787 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:51.127797 | orchestrator |
2026-04-17 06:07:51.127808 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 06:07:51.127819 | orchestrator | Friday 17 April 2026 06:07:50 +0000 (0:00:00.127) 0:12:52.831 **********
2026-04-17 06:07:51.127829 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:07:51.127840 | orchestrator |
2026-04-17 06:07:51.127851 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 06:07:51.127861 | orchestrator | Friday 17 April 2026 06:07:50 +0000 (0:00:00.127) 0:12:52.958 **********
2026-04-17 06:07:51.127872 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:07:51.127883 | orchestrator |
2026-04-17 06:07:51.127893 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 06:07:51.127904 | orchestrator | Friday 17 April 2026 06:07:50 +0000 (0:00:00.160) 0:12:53.118 **********
2026-04-17 06:07:51.127914 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:07:51.127925 | orchestrator |
2026-04-17 06:07:51.127936 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 06:07:51.127946 | orchestrator | Friday 17 April 2026 06:07:50 +0000 (0:00:00.147) 0:12:53.266 **********
2026-04-17 06:07:51.127957 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:07:51.127968 | orchestrator |
2026-04-17 06:07:51.127978 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 06:07:51.127989 | orchestrator | Friday 17 April 2026 06:07:51 +0000 (0:00:00.553) 0:12:53.819 **********
2026-04-17 06:07:51.128012 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.728302 | orchestrator |
2026-04-17 06:08:03.728423 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 06:08:03.728442 | orchestrator | Friday 17 April 2026 06:07:51 +0000 (0:00:00.140) 0:12:53.960 **********
2026-04-17 06:08:03.728455 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.728467 | orchestrator |
2026-04-17 06:08:03.728478 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 06:08:03.728490 | orchestrator | Friday 17 April 2026 06:07:51 +0000 (0:00:00.145) 0:12:54.106 **********
2026-04-17 06:08:03.728501 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.728511 | orchestrator |
2026-04-17 06:08:03.728522 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 06:08:03.728534 | orchestrator | Friday 17 April 2026 06:07:51 +0000 (0:00:00.136) 0:12:54.242 **********
2026-04-17 06:08:03.728545 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:03.728556 | orchestrator |
2026-04-17 06:08:03.728567 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 06:08:03.728578 | orchestrator | Friday 17 April 2026 06:07:51 +0000 (0:00:00.162) 0:12:54.405 **********
2026-04-17 06:08:03.728588 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:03.728599 | orchestrator |
2026-04-17 06:08:03.728610 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 06:08:03.728644 | orchestrator | Friday 17 April 2026 06:07:51 +0000 (0:00:00.161) 0:12:54.566 **********
2026-04-17 06:08:03.728655 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:03.728666 | orchestrator |
2026-04-17 06:08:03.728677 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-17 06:08:03.728688 | orchestrator | Friday 17 April 2026 06:07:52 +0000 (0:00:00.266) 0:12:54.832 **********
2026-04-17 06:08:03.728699 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.728710 | orchestrator |
2026-04-17 06:08:03.728720 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-17 06:08:03.728731 | orchestrator | Friday 17 April 2026 06:07:52 +0000 (0:00:00.136) 0:12:54.969 **********
2026-04-17 06:08:03.728742 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.728752 | orchestrator |
2026-04-17 06:08:03.728763 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-17 06:08:03.728774 | orchestrator | Friday 17 April 2026 06:07:52 +0000 (0:00:00.130) 0:12:55.100 **********
2026-04-17 06:08:03.728785 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.728795 | orchestrator |
2026-04-17 06:08:03.728806 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-17 06:08:03.728817 | orchestrator | Friday 17 April 2026 06:07:52 +0000 (0:00:00.136) 0:12:55.236 **********
2026-04-17 06:08:03.728830 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.728843 | orchestrator |
2026-04-17 06:08:03.728857 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-17 06:08:03.728870 | orchestrator | Friday 17 April 2026 06:07:52 +0000 (0:00:00.162) 0:12:55.399 **********
2026-04-17 06:08:03.728884 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.728897 | orchestrator |
2026-04-17 06:08:03.728910 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-17 06:08:03.728924 | orchestrator | Friday 17 April 2026 06:07:52 +0000 (0:00:00.160) 0:12:55.560 **********
2026-04-17 06:08:03.728937 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.728950 | orchestrator |
2026-04-17 06:08:03.728963 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-17 06:08:03.728976 | orchestrator | Friday 17 April 2026 06:07:53 +0000 (0:00:00.536) 0:12:56.097 **********
2026-04-17 06:08:03.728989 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.729002 | orchestrator |
2026-04-17 06:08:03.729016 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-17 06:08:03.729044 | orchestrator | Friday 17 April 2026 06:07:53 +0000 (0:00:00.142) 0:12:56.240 **********
2026-04-17 06:08:03.729058 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.729071 | orchestrator |
2026-04-17 06:08:03.729116 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-17 06:08:03.729137 | orchestrator | Friday 17 April 2026 06:07:53 +0000 (0:00:00.129) 0:12:56.369 **********
2026-04-17 06:08:03.729157 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.729177 | orchestrator |
2026-04-17 06:08:03.729191 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-17 06:08:03.729202 | orchestrator | Friday 17 April 2026 06:07:53 +0000 (0:00:00.142) 0:12:56.512 **********
2026-04-17 06:08:03.729212 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.729223 | orchestrator |
2026-04-17 06:08:03.729234 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-17 06:08:03.729245 | orchestrator | Friday 17 April 2026 06:07:53 +0000 (0:00:00.146) 0:12:56.658 **********
2026-04-17 06:08:03.729255 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.729266 | orchestrator |
2026-04-17 06:08:03.729276 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-17 06:08:03.729287 | orchestrator | Friday 17 April 2026 06:07:54 +0000 (0:00:00.130) 0:12:56.789 **********
2026-04-17 06:08:03.729298 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.729309 | orchestrator |
2026-04-17 06:08:03.729319 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-17 06:08:03.729340 | orchestrator | Friday 17 April 2026 06:07:54 +0000 (0:00:00.218) 0:12:57.008 **********
2026-04-17 06:08:03.729351 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:03.729362 | orchestrator |
2026-04-17 06:08:03.729373 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-17 06:08:03.729384 | orchestrator | Friday 17 April 2026 06:07:55 +0000 (0:00:00.974) 0:12:57.982 **********
2026-04-17 06:08:03.729394 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:03.729405 | orchestrator |
2026-04-17 06:08:03.729415 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-17 06:08:03.729426 | orchestrator | Friday 17 April 2026 06:07:56 +0000 (0:00:01.390) 0:12:59.373 **********
2026-04-17 06:08:03.729437 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-04-17 06:08:03.729449 | orchestrator |
2026-04-17 06:08:03.729478 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-17 06:08:03.729490 | orchestrator | Friday 17 April 2026 06:07:56 +0000 (0:00:00.226) 0:12:59.600 **********
2026-04-17 06:08:03.729501 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.729512 | orchestrator |
2026-04-17 06:08:03.729523 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-17 06:08:03.729534 | orchestrator | Friday 17 April 2026 06:07:57 +0000 (0:00:00.168) 0:12:59.768 **********
2026-04-17 06:08:03.729544 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.729555 | orchestrator |
2026-04-17 06:08:03.729565 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-17 06:08:03.729576 | orchestrator | Friday 17 April 2026 06:07:57 +0000 (0:00:00.135) 0:12:59.904 **********
2026-04-17 06:08:03.729587 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 06:08:03.729597 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 06:08:03.729608 | orchestrator |
2026-04-17 06:08:03.729619 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-17 06:08:03.729630 | orchestrator | Friday 17 April 2026 06:07:58 +0000 (0:00:01.300) 0:13:01.204 **********
2026-04-17 06:08:03.729641 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:03.729651 | orchestrator |
2026-04-17 06:08:03.729662 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-17 06:08:03.729673 | orchestrator | Friday 17 April 2026 06:07:58 +0000 (0:00:00.511) 0:13:01.715 **********
2026-04-17 06:08:03.729684 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.729695 | orchestrator |
2026-04-17 06:08:03.729705 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-17 06:08:03.729716 | orchestrator | Friday 17 April 2026 06:07:59 +0000 (0:00:00.161) 0:13:01.877 **********
2026-04-17 06:08:03.729734 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.729752 | orchestrator |
2026-04-17 06:08:03.729769 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-17 06:08:03.729786 | orchestrator | Friday 17 April 2026 06:07:59 +0000 (0:00:00.148) 0:13:02.026 **********
2026-04-17 06:08:03.729803 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.729822 | orchestrator |
2026-04-17 06:08:03.729840 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-17 06:08:03.729858 | orchestrator | Friday 17 April 2026 06:07:59 +0000 (0:00:00.141) 0:13:02.167 **********
2026-04-17 06:08:03.729876 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-04-17 06:08:03.729895 | orchestrator |
2026-04-17 06:08:03.729915 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-17 06:08:03.729932 | orchestrator | Friday 17 April 2026 06:07:59 +0000 (0:00:00.214) 0:13:02.381 **********
2026-04-17 06:08:03.729952 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:03.729969 | orchestrator |
2026-04-17 06:08:03.729988 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-17 06:08:03.730126 | orchestrator | Friday 17 April 2026 06:08:00 +0000 (0:00:00.717) 0:13:03.099 **********
2026-04-17 06:08:03.730154 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 06:08:03.730173 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 06:08:03.730190 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 06:08:03.730258 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.730278 | orchestrator |
2026-04-17 06:08:03.730296 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-17 06:08:03.730328 | orchestrator | Friday 17 April 2026 06:08:00 +0000 (0:00:00.158) 0:13:03.258 **********
2026-04-17 06:08:03.730348 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.730367 | orchestrator |
2026-04-17 06:08:03.730385 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-17 06:08:03.730405 | orchestrator | Friday 17 April 2026 06:08:00 +0000 (0:00:00.132) 0:13:03.391 **********
2026-04-17 06:08:03.730423 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.730442 | orchestrator |
2026-04-17 06:08:03.730462 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-17 06:08:03.730480 | orchestrator | Friday 17 April 2026 06:08:00 +0000 (0:00:00.176) 0:13:03.567 **********
2026-04-17 06:08:03.730499 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.730510 | orchestrator |
2026-04-17 06:08:03.730521 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-17 06:08:03.730532 | orchestrator | Friday 17 April 2026 06:08:00 +0000 (0:00:00.151) 0:13:03.719 **********
2026-04-17 06:08:03.730543 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.730553 | orchestrator |
2026-04-17 06:08:03.730564 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-17 06:08:03.730575 | orchestrator | Friday 17 April 2026 06:08:01 +0000 (0:00:00.149) 0:13:03.868 **********
2026-04-17 06:08:03.730585 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:03.730596 | orchestrator |
2026-04-17 06:08:03.730607 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-17 06:08:03.730617 | orchestrator | Friday 17 April 2026 06:08:01 +0000 (0:00:00.552) 0:13:04.420 **********
2026-04-17 06:08:03.730628 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:03.730639 | orchestrator |
2026-04-17 06:08:03.730650 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-17 06:08:03.730661 | orchestrator | Friday 17 April 2026 06:08:03 +0000 (0:00:01.651) 0:13:06.072 **********
2026-04-17 06:08:03.730671 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:03.730682 | orchestrator |
2026-04-17 06:08:03.730693 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-17 06:08:03.730704 | orchestrator | Friday 17 April 2026 06:08:03 +0000 (0:00:00.168) 0:13:06.241 **********
2026-04-17 06:08:03.730715 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-04-17 06:08:03.730726 | orchestrator |
2026-04-17 06:08:03.730750 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-17 06:08:16.871617 | orchestrator | Friday 17 April 2026 06:08:03 +0000 (0:00:00.223) 0:13:06.464 **********
2026-04-17 06:08:16.871710 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.871721 | orchestrator |
2026-04-17 06:08:16.871729 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-17 06:08:16.871736 | orchestrator | Friday 17 April 2026 06:08:03 +0000 (0:00:00.139) 0:13:06.604 **********
2026-04-17 06:08:16.871743 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.871750 | orchestrator |
2026-04-17 06:08:16.871756 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-17 06:08:16.871763 | orchestrator | Friday 17 April 2026 06:08:04 +0000 (0:00:00.166) 0:13:06.770 **********
2026-04-17 06:08:16.871770 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.871794 | orchestrator |
2026-04-17 06:08:16.871801 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-17 06:08:16.871808 | orchestrator | Friday 17 April 2026 06:08:04 +0000 (0:00:00.186) 0:13:06.956 **********
2026-04-17 06:08:16.871814 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.871821 | orchestrator |
2026-04-17 06:08:16.871827 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-17 06:08:16.871833 | orchestrator | Friday 17 April 2026 06:08:04 +0000 (0:00:00.160) 0:13:07.117 **********
2026-04-17 06:08:16.871840 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.871846 | orchestrator |
2026-04-17 06:08:16.871853 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-17 06:08:16.871859 | orchestrator | Friday 17 April 2026 06:08:04 +0000 (0:00:00.167) 0:13:07.285 **********
2026-04-17 06:08:16.871865 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.871872 | orchestrator |
2026-04-17 06:08:16.871878 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-17 06:08:16.871885 | orchestrator | Friday 17 April 2026 06:08:04 +0000 (0:00:00.154) 0:13:07.439 **********
2026-04-17 06:08:16.871891 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.871897 | orchestrator |
2026-04-17 06:08:16.871904 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-17 06:08:16.871910 | orchestrator | Friday 17 April 2026 06:08:04 +0000 (0:00:00.159) 0:13:07.599 **********
2026-04-17 06:08:16.871917 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.871923 | orchestrator |
2026-04-17 06:08:16.871930 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-17 06:08:16.871936 | orchestrator | Friday 17 April 2026 06:08:05 +0000 (0:00:00.163) 0:13:07.763 **********
2026-04-17 06:08:16.871942 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:16.871949 | orchestrator |
2026-04-17 06:08:16.871956 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-17 06:08:16.871962 | orchestrator | Friday 17 April 2026 06:08:05 +0000 (0:00:00.638) 0:13:08.401 **********
2026-04-17 06:08:16.871969 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-04-17 06:08:16.871976 | orchestrator |
2026-04-17 06:08:16.871982 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-17 06:08:16.871989 | orchestrator | Friday 17 April 2026 06:08:05 +0000 (0:00:00.250) 0:13:08.652 **********
2026-04-17 06:08:16.871995 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-04-17 06:08:16.872002 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-17 06:08:16.872009 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-17 06:08:16.872015 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-17 06:08:16.872021 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-17 06:08:16.872039 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-17 06:08:16.872046 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-17 06:08:16.872053 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-17 06:08:16.872060 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-17 06:08:16.872066 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-17 06:08:16.872072 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-17 06:08:16.872079 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-17 06:08:16.872085 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-17 06:08:16.872091 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-17 06:08:16.872098 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-04-17 06:08:16.872104 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-04-17 06:08:16.872110 | orchestrator |
2026-04-17 06:08:16.872117 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-17 06:08:16.872181 | orchestrator | Friday 17 April 2026 06:08:11 +0000 (0:00:05.851) 0:13:14.503 **********
2026-04-17 06:08:16.872190 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872197 | orchestrator |
2026-04-17 06:08:16.872204 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-17 06:08:16.872211 | orchestrator | Friday 17 April 2026 06:08:11 +0000 (0:00:00.137) 0:13:14.641 **********
2026-04-17 06:08:16.872218 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872225 | orchestrator |
2026-04-17 06:08:16.872232 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-17 06:08:16.872239 | orchestrator | Friday 17 April 2026 06:08:12 +0000 (0:00:00.164) 0:13:14.805 **********
2026-04-17 06:08:16.872246 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872253 | orchestrator |
2026-04-17 06:08:16.872260 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-17 06:08:16.872268 | orchestrator | Friday 17 April 2026 06:08:12 +0000 (0:00:00.146) 0:13:14.952 **********
2026-04-17 06:08:16.872275 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872283 | orchestrator |
2026-04-17 06:08:16.872290 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-17 06:08:16.872310 | orchestrator | Friday 17 April 2026 06:08:12 +0000 (0:00:00.127) 0:13:15.079 **********
2026-04-17 06:08:16.872317 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872325 | orchestrator |
2026-04-17 06:08:16.872332 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-17 06:08:16.872339 | orchestrator | Friday 17 April 2026 06:08:12 +0000 (0:00:00.122) 0:13:15.202 **********
2026-04-17 06:08:16.872347 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872354 | orchestrator |
2026-04-17 06:08:16.872361 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-17 06:08:16.872369 | orchestrator | Friday 17 April 2026 06:08:12 +0000 (0:00:00.127) 0:13:15.329 **********
2026-04-17 06:08:16.872376 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872383 | orchestrator |
2026-04-17 06:08:16.872390 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-17 06:08:16.872397 | orchestrator | Friday 17 April 2026 06:08:12 +0000 (0:00:00.145) 0:13:15.474 **********
2026-04-17 06:08:16.872404 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872411 | orchestrator |
2026-04-17 06:08:16.872418 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-17 06:08:16.872426 | orchestrator | Friday 17 April 2026 06:08:12 +0000 (0:00:00.146) 0:13:15.621 **********
2026-04-17 06:08:16.872433 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872440 | orchestrator |
2026-04-17 06:08:16.872448 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-17 06:08:16.872455 | orchestrator | Friday 17 April 2026 06:08:13 +0000 (0:00:00.522) 0:13:16.143 **********
2026-04-17 06:08:16.872462 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872469 | orchestrator |
2026-04-17 06:08:16.872477 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-17 06:08:16.872484 | orchestrator | Friday 17 April 2026 06:08:13 +0000 (0:00:00.159) 0:13:16.303 **********
2026-04-17 06:08:16.872491 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872498 | orchestrator |
2026-04-17 06:08:16.872505 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-17 06:08:16.872513 | orchestrator | Friday 17 April 2026 06:08:13 +0000 (0:00:00.135) 0:13:16.438 **********
2026-04-17 06:08:16.872520 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872527 | orchestrator |
2026-04-17 06:08:16.872535 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-17 06:08:16.872541 | orchestrator | Friday 17 April 2026 06:08:13 +0000 (0:00:00.141) 0:13:16.580 **********
2026-04-17 06:08:16.872547 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872560 | orchestrator |
2026-04-17 06:08:16.872567 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-17 06:08:16.872573 | orchestrator | Friday 17 April 2026 06:08:14 +0000 (0:00:00.237) 0:13:16.817 **********
2026-04-17 06:08:16.872579 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872585 | orchestrator |
2026-04-17 06:08:16.872591 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-17 06:08:16.872597 | orchestrator | Friday 17 April 2026 06:08:14 +0000 (0:00:00.140) 0:13:16.957 **********
2026-04-17 06:08:16.872603 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872609 | orchestrator |
2026-04-17 06:08:16.872615 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-17 06:08:16.872621 | orchestrator | Friday 17 April 2026 06:08:14 +0000 (0:00:00.253) 0:13:17.211 **********
2026-04-17 06:08:16.872628 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872634 | orchestrator |
2026-04-17 06:08:16.872640 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-17 06:08:16.872650 | orchestrator | Friday 17 April 2026 06:08:14 +0000 (0:00:00.153) 0:13:17.364 **********
2026-04-17 06:08:16.872656 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872663 | orchestrator |
2026-04-17 06:08:16.872669 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:08:16.872676 | orchestrator | Friday 17 April 2026 06:08:14 +0000 (0:00:00.132) 0:13:17.497 **********
2026-04-17 06:08:16.872682 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872689 | orchestrator |
2026-04-17 06:08:16.872695 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:08:16.872701 | orchestrator | Friday 17 April 2026 06:08:14 +0000 (0:00:00.151) 0:13:17.648 **********
2026-04-17 06:08:16.872707 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872713 | orchestrator |
2026-04-17 06:08:16.872719 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:08:16.872726 | orchestrator | Friday 17 April 2026 06:08:15 +0000 (0:00:00.147) 0:13:17.796 **********
2026-04-17 06:08:16.872732 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872738 | orchestrator |
2026-04-17 06:08:16.872744 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:08:16.872750 | orchestrator | Friday 17 April 2026 06:08:15 +0000 (0:00:00.140) 0:13:17.937 **********
2026-04-17 06:08:16.872756 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872762 | orchestrator |
2026-04-17 06:08:16.872769 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:08:16.872775 | orchestrator | Friday 17 April 2026 06:08:15 +0000 (0:00:00.165) 0:13:18.102 **********
2026-04-17 06:08:16.872781 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-17 06:08:16.872787 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-17 06:08:16.872793 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-17 06:08:16.872799 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:16.872805 | orchestrator |
2026-04-17 06:08:16.872811 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 06:08:16.872818 | orchestrator | Friday 17 April 2026 06:08:16 +0000 (0:00:01.267) 0:13:19.370 **********
2026-04-17 06:08:16.872824 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-17 06:08:16.872834 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-17 06:08:47.836465 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-17 06:08:47.836601 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:47.836630 | orchestrator |
2026-04-17 06:08:47.836688 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 06:08:47.836703 | orchestrator | Friday 17 April 2026 06:08:17 +0000 (0:00:00.462) 0:13:19.833 **********
2026-04-17 06:08:47.836715 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-17 06:08:47.836753 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-17 06:08:47.836765 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-17 06:08:47.836776 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:47.836786 | orchestrator |
2026-04-17 06:08:47.836798 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 06:08:47.836809 | orchestrator | Friday 17 April 2026 06:08:17 +0000 (0:00:00.494) 0:13:20.327 **********
2026-04-17 06:08:47.836820 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:47.836831 | orchestrator |
2026-04-17 06:08:47.836842 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 06:08:47.836854 | orchestrator | Friday 17 April 2026 06:08:17 +0000 (0:00:00.162) 0:13:20.490 **********
2026-04-17 06:08:47.836865 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-17 06:08:47.836876 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:47.836889 | orchestrator |
2026-04-17 06:08:47.836908 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-17 06:08:47.836928 | orchestrator | Friday 17 April 2026 06:08:18 +0000 (0:00:00.381) 0:13:20.872 **********
2026-04-17 06:08:47.836946 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:47.836965 | orchestrator |
2026-04-17 06:08:47.836985 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-17 06:08:47.837004 | orchestrator | Friday 17 April 2026 06:08:18 +0000 (0:00:00.845) 0:13:21.717 **********
2026-04-17 06:08:47.837022 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:08:47.837037 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:08:47.837050 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 06:08:47.837062 | orchestrator |
2026-04-17 06:08:47.837074 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-17 06:08:47.837087 | orchestrator | Friday 17 April 2026 06:08:20 +0000 (0:00:01.114) 0:13:22.831 **********
2026-04-17 06:08:47.837099 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-04-17 06:08:47.837112 | orchestrator |
2026-04-17 06:08:47.837124 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-17 06:08:47.837136 | orchestrator | Friday 17 April 2026 06:08:20 +0000 (0:00:00.254) 0:13:23.086 **********
2026-04-17 06:08:47.837149 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:47.837162 | orchestrator |
2026-04-17 06:08:47.837174 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-17 06:08:47.837186 | orchestrator | Friday 17 April 2026 06:08:20 +0000 (0:00:00.553) 0:13:23.640 **********
2026-04-17 06:08:47.837198 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:47.837210 | orchestrator |
2026-04-17 06:08:47.837223 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-17 06:08:47.837234 | orchestrator | Friday 17 April 2026 06:08:21 +0000 (0:00:00.153) 0:13:23.794 **********
2026-04-17 06:08:47.837271 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:08:47.837302 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:08:47.837313 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:08:47.837324 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-04-17 06:08:47.837334 | orchestrator |
2026-04-17 06:08:47.837347 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-17 06:08:47.837365 | orchestrator | Friday 17 April 2026 06:08:27 +0000 (0:00:06.724) 0:13:30.519 **********
2026-04-17 06:08:47.837383 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:47.837401 | orchestrator |
2026-04-17 06:08:47.837420 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-17 06:08:47.837436 | orchestrator | Friday 17 April 2026 06:08:28 +0000 (0:00:01.035) 0:13:31.554 **********
2026-04-17 06:08:47.837467 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-17 06:08:47.837486 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-17 06:08:47.837506 | orchestrator |
2026-04-17 06:08:47.837525 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-17 06:08:47.837544 | orchestrator | Friday 17 April 2026 06:08:31 +0000 (0:00:02.252) 0:13:33.807 **********
2026-04-17 06:08:47.837562 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-17 06:08:47.837575 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-17 06:08:47.837585 | orchestrator |
2026-04-17 06:08:47.837597 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-17 06:08:47.837615 | orchestrator | Friday 17 April 2026 06:08:32 +0000 (0:00:01.137) 0:13:34.944 **********
2026-04-17 06:08:47.837632 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:08:47.837643 | orchestrator |
2026-04-17 06:08:47.837654 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-17 06:08:47.837664 | orchestrator | Friday 17 April 2026 06:08:32 +0000 (0:00:00.482) 0:13:35.426 **********
2026-04-17 06:08:47.837675 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:47.837686 | orchestrator |
2026-04-17 06:08:47.837697 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-17 06:08:47.837707 | orchestrator | Friday 17 April 2026 06:08:32 +0000 (0:00:00.171) 0:13:35.598 **********
2026-04-17 06:08:47.837734 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:47.837746 | orchestrator |
2026-04-17 06:08:47.837768 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-17 06:08:47.837799 | orchestrator | Friday 17 April 2026 06:08:32 +0000 (0:00:00.140) 0:13:35.738 **********
2026-04-17 06:08:47.837811 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-04-17 06:08:47.837822 | orchestrator |
2026-04-17 06:08:47.837832 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-17 06:08:47.837843 | orchestrator | Friday 17 April 2026 06:08:33 +0000 (0:00:00.223) 0:13:35.961 **********
2026-04-17 06:08:47.837854 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:47.837865 | orchestrator |
2026-04-17 06:08:47.837876 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-17 06:08:47.837887 | orchestrator | Friday 17 April 2026 06:08:33 +0000 (0:00:00.158) 0:13:36.120 **********
2026-04-17 06:08:47.837898 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:08:47.837909 | orchestrator |
2026-04-17 06:08:47.837920 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-17 06:08:47.837931 | orchestrator | Friday 17 April 2026 06:08:33 +0000 (0:00:00.156) 0:13:36.277 ********** 2026-04-17 06:08:47.837942 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2 2026-04-17 06:08:47.837952 | orchestrator | 2026-04-17 06:08:47.837963 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-17 06:08:47.837974 | orchestrator | Friday 17 April 2026 06:08:33 +0000 (0:00:00.208) 0:13:36.485 ********** 2026-04-17 06:08:47.837985 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:08:47.837996 | orchestrator | 2026-04-17 06:08:47.838007 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-17 06:08:47.838090 | orchestrator | Friday 17 April 2026 06:08:34 +0000 (0:00:01.047) 0:13:37.533 ********** 2026-04-17 06:08:47.838111 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:08:47.838129 | orchestrator | 2026-04-17 06:08:47.838141 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-17 06:08:47.838151 | orchestrator | Friday 17 April 2026 06:08:36 +0000 (0:00:01.317) 0:13:38.850 ********** 2026-04-17 06:08:47.838162 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:08:47.838172 | orchestrator | 2026-04-17 06:08:47.838183 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-17 06:08:47.838193 | orchestrator | Friday 17 April 2026 06:08:37 +0000 (0:00:01.464) 0:13:40.315 ********** 2026-04-17 06:08:47.838215 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:08:47.838227 | orchestrator | 2026-04-17 06:08:47.838281 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-17 06:08:47.838301 | orchestrator | Friday 17 April 2026 
06:08:40 +0000 (0:00:02.931) 0:13:43.246 ********** 2026-04-17 06:08:47.838320 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-17 06:08:47.838339 | orchestrator | 2026-04-17 06:08:47.838358 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-17 06:08:47.838376 | orchestrator | Friday 17 April 2026 06:08:41 +0000 (0:00:00.637) 0:13:43.884 ********** 2026-04-17 06:08:47.838394 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:08:47.838408 | orchestrator | 2026-04-17 06:08:47.838419 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-17 06:08:47.838430 | orchestrator | Friday 17 April 2026 06:08:42 +0000 (0:00:01.343) 0:13:45.227 ********** 2026-04-17 06:08:47.838440 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:08:47.838451 | orchestrator | 2026-04-17 06:08:47.838462 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-17 06:08:47.838472 | orchestrator | Friday 17 April 2026 06:08:43 +0000 (0:00:01.309) 0:13:46.537 ********** 2026-04-17 06:08:47.838492 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:08:47.838503 | orchestrator | 2026-04-17 06:08:47.838513 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-17 06:08:47.838524 | orchestrator | Friday 17 April 2026 06:08:44 +0000 (0:00:00.362) 0:13:46.899 ********** 2026-04-17 06:08:47.838535 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:08:47.838545 | orchestrator | 2026-04-17 06:08:47.838556 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-17 06:08:47.838567 | orchestrator | Friday 17 April 2026 06:08:44 +0000 (0:00:00.159) 0:13:47.059 ********** 2026-04-17 06:08:47.838577 | orchestrator | skipping: 
[testbed-node-2] => (item=dashboard)  2026-04-17 06:08:47.838588 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-04-17 06:08:47.838599 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:08:47.838610 | orchestrator | 2026-04-17 06:08:47.838620 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-04-17 06:08:47.838631 | orchestrator | Friday 17 April 2026 06:08:44 +0000 (0:00:00.367) 0:13:47.427 ********** 2026-04-17 06:08:47.838642 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-04-17 06:08:47.838652 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-04-17 06:08:47.838663 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-04-17 06:08:47.838674 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-04-17 06:08:47.838684 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:08:47.838695 | orchestrator | 2026-04-17 06:08:47.838705 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-04-17 06:08:47.838716 | orchestrator | 2026-04-17 06:08:47.838727 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 06:08:47.838737 | orchestrator | Friday 17 April 2026 06:08:46 +0000 (0:00:01.922) 0:13:49.349 ********** 2026-04-17 06:08:47.838748 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:08:47.838759 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:08:47.838770 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:08:47.838780 | orchestrator | 2026-04-17 06:08:47.838791 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:08:47.838802 | orchestrator | Friday 17 April 2026 06:08:47 +0000 (0:00:00.675) 0:13:50.025 ********** 2026-04-17 06:08:47.838812 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:08:47.838823 | orchestrator | ok: [testbed-node-4] 
2026-04-17 06:08:47.838834 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:08:47.838845 | orchestrator | 2026-04-17 06:08:47.838866 | orchestrator | TASK [Get pool list] *********************************************************** 2026-04-17 06:08:52.187937 | orchestrator | Friday 17 April 2026 06:08:47 +0000 (0:00:00.548) 0:13:50.574 ********** 2026-04-17 06:08:52.188056 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:08:52.188070 | orchestrator | 2026-04-17 06:08:52.188081 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-04-17 06:08:52.188091 | orchestrator | Friday 17 April 2026 06:08:49 +0000 (0:00:01.776) 0:13:52.350 ********** 2026-04-17 06:08:52.188101 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:08:52.188110 | orchestrator | 2026-04-17 06:08:52.188119 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-04-17 06:08:52.188129 | orchestrator | Friday 17 April 2026 06:08:51 +0000 (0:00:01.864) 0:13:54.215 ********** 2026-04-17 06:08:52.188158 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-04-17T03:55:11.832815+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 
'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.188193 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-04-17T03:56:23.893593+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 
'last_change': '35', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.188218 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-04-17T03:56:27.144034+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': 
"0'0", 'target_version': "0'0"}, 'last_change': '82', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.188237 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-04-17T03:57:24.596391+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': 
{'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '80', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.570402 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-04-17T03:57:30.750293+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 
'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '80', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.570527 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-04-17T03:57:36.965898+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 
'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '80', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.570584 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-04-17T03:57:43.108277+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 
'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '178', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '77', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.570602 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-04-17T03:57:48.232283+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 
'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '80', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '78', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.570627 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-04-17T03:57:59.947763+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 
'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '80', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '78', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.880313 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-04-17T03:58:45.134373+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '96', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 96, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.880480 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-04-17T03:58:54.152872+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 
'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '106', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 106, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.880515 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-04-17T03:59:03.573804+0000', 'flags': 8193, 'flags_names': 
'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '188', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 188, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:08:52.880558 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 
'metrics', 'create_time': '2026-04-17T03:59:12.781853+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '124', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 124, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-17 06:10:16.703823 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-04-17T03:59:21.695728+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '133', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 133, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 
'average_primary_affinity_weighted': 1}}) 2026-04-17 06:10:16.703957 | orchestrator | 2026-04-17 06:10:16.703974 | orchestrator | TASK [Disable balancer] ******************************************************** 2026-04-17 06:10:16.703986 | orchestrator | Friday 17 April 2026 06:08:53 +0000 (0:00:02.100) 0:13:56.316 ********** 2026-04-17 06:10:16.703996 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:10:16.704005 | orchestrator | 2026-04-17 06:10:16.704015 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-04-17 06:10:16.704024 | orchestrator | Friday 17 April 2026 06:08:55 +0000 (0:00:01.726) 0:13:58.042 ********** 2026-04-17 06:10:16.704034 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-04-17 06:10:16.704045 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-04-17 06:10:16.704055 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-04-17 06:10:16.704064 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-04-17 06:10:16.704075 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-04-17 06:10:16.704085 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-04-17 06:10:16.704094 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-04-17 06:10:16.704103 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-04-17 06:10:16.704113 | 
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-04-17 06:10:16.704122 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-04-17 06:10:16.704131 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-04-17 06:10:16.704140 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-04-17 06:10:16.704150 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-04-17 06:10:16.704159 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-04-17 06:10:16.704168 | orchestrator | 2026-04-17 06:10:16.704178 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-04-17 06:10:16.704203 | orchestrator | Friday 17 April 2026 06:10:05 +0000 (0:01:09.741) 0:15:07.784 ********** 2026-04-17 06:10:16.704213 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-04-17 06:10:16.704223 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-04-17 06:10:16.704232 | orchestrator | 2026-04-17 06:10:16.704248 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-17 06:10:16.704257 | orchestrator | 2026-04-17 06:10:16.704267 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:10:16.704283 | orchestrator | Friday 17 April 2026 06:10:10 +0000 (0:00:05.354) 0:15:13.139 ********** 2026-04-17 06:10:16.704293 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-04-17 06:10:16.704302 | orchestrator | 2026-04-17 06:10:16.704312 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 06:10:16.704321 | 
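The pre-upgrade tasks logged above (disable the balancer, pin `pg_autoscale_mode` on each pool, then set the `noout` and `nodeep-scrub` OSD flags) correspond to a standard manual maintenance sequence. A hedged sketch of the equivalent `ceph` CLI calls follows; ceph-ansible drives these through its own modules, so this is an illustration rather than the playbook's exact commands, and the pool names are taken from this run's inventory — substitute your own:

```shell
# Hedged sketch of the manual equivalents of the pre-upgrade tasks above.
# Assumes an admin keyring on a mon node; pool names are from this log
# and are illustrative only.
set -euo pipefail

# 1. Stop the balancer so PG mappings stay stable during the upgrade.
ceph balancer off

# 2. Pin pg_autoscale_mode off per pool so pg_num cannot change mid-upgrade
#    (the RBD pools in this run already report pg_autoscale_mode 'off').
for pool in backups volumes images metrics vms; do
    ceph osd pool set "$pool" pg_autoscale_mode off
done

# 3. Set OSD flags: don't auto-mark OSDs out, and pause deep scrubs.
ceph osd set noout
ceph osd set nodeep-scrub

# After the upgrade completes, revert:
#   ceph osd unset noout
#   ceph osd unset nodeep-scrub
#   ceph balancer on
```

Reverting the flags afterwards matters: leaving `noout` set indefinitely prevents the cluster from re-replicating data off a genuinely failed OSD.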
orchestrator | Friday 17 April 2026 06:10:10 +0000 (0:00:00.299) 0:15:13.438 ********** 2026-04-17 06:10:16.704331 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:16.704340 | orchestrator | 2026-04-17 06:10:16.704350 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-17 06:10:16.704359 | orchestrator | Friday 17 April 2026 06:10:11 +0000 (0:00:00.452) 0:15:13.891 ********** 2026-04-17 06:10:16.704368 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:16.704378 | orchestrator | 2026-04-17 06:10:16.704387 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 06:10:16.704396 | orchestrator | Friday 17 April 2026 06:10:11 +0000 (0:00:00.145) 0:15:14.037 ********** 2026-04-17 06:10:16.704406 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:16.704415 | orchestrator | 2026-04-17 06:10:16.704425 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:10:16.704434 | orchestrator | Friday 17 April 2026 06:10:11 +0000 (0:00:00.455) 0:15:14.492 ********** 2026-04-17 06:10:16.704443 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:16.704453 | orchestrator | 2026-04-17 06:10:16.704462 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-17 06:10:16.704472 | orchestrator | Friday 17 April 2026 06:10:11 +0000 (0:00:00.162) 0:15:14.655 ********** 2026-04-17 06:10:16.704481 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:16.704490 | orchestrator | 2026-04-17 06:10:16.704500 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-17 06:10:16.704510 | orchestrator | Friday 17 April 2026 06:10:12 +0000 (0:00:00.189) 0:15:14.844 ********** 2026-04-17 06:10:16.704519 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:16.704529 | orchestrator | 2026-04-17 06:10:16.704538 | orchestrator | 
TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-17 06:10:16.704547 | orchestrator | Friday 17 April 2026 06:10:12 +0000 (0:00:00.501) 0:15:15.346 ********** 2026-04-17 06:10:16.704557 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:16.704589 | orchestrator | 2026-04-17 06:10:16.704599 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-17 06:10:16.704608 | orchestrator | Friday 17 April 2026 06:10:12 +0000 (0:00:00.153) 0:15:15.499 ********** 2026-04-17 06:10:16.704618 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:16.704627 | orchestrator | 2026-04-17 06:10:16.704636 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-17 06:10:16.704646 | orchestrator | Friday 17 April 2026 06:10:12 +0000 (0:00:00.142) 0:15:15.641 ********** 2026-04-17 06:10:16.704655 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:10:16.704664 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:10:16.704674 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:10:16.704683 | orchestrator | 2026-04-17 06:10:16.704692 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-17 06:10:16.704702 | orchestrator | Friday 17 April 2026 06:10:13 +0000 (0:00:00.763) 0:15:16.404 ********** 2026-04-17 06:10:16.704711 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:16.704720 | orchestrator | 2026-04-17 06:10:16.704729 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-17 06:10:16.704739 | orchestrator | Friday 17 April 2026 06:10:13 +0000 (0:00:00.274) 0:15:16.679 ********** 2026-04-17 06:10:16.704748 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:10:16.704757 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:10:16.704773 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:10:16.704782 | orchestrator | 2026-04-17 06:10:16.704792 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-17 06:10:16.704801 | orchestrator | Friday 17 April 2026 06:10:15 +0000 (0:00:01.931) 0:15:18.610 ********** 2026-04-17 06:10:16.704810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-17 06:10:16.704820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-17 06:10:16.704829 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-17 06:10:16.704839 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:16.704848 | orchestrator | 2026-04-17 06:10:16.704858 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-17 06:10:16.704867 | orchestrator | Friday 17 April 2026 06:10:16 +0000 (0:00:00.423) 0:15:19.033 ********** 2026-04-17 06:10:16.704878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-17 06:10:16.704896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-17 06:10:21.652093 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 06:10:21.652195 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:21.652211 | orchestrator | 2026-04-17 06:10:21.652223 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-17 06:10:21.652235 | orchestrator | Friday 17 April 2026 06:10:16 +0000 (0:00:00.675) 0:15:19.709 ********** 2026-04-17 06:10:21.652249 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:21.652262 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:21.652274 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:21.652284 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:21.652295 | orchestrator | 2026-04-17 06:10:21.652306 | orchestrator | TASK [ceph-facts : Set_fact 
running_mon - container] *************************** 2026-04-17 06:10:21.652317 | orchestrator | Friday 17 April 2026 06:10:17 +0000 (0:00:00.187) 0:15:19.897 ********** 2026-04-17 06:10:21.652330 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:10:14.496633', 'end': '2026-04-17 06:10:14.535777', 'delta': '0:00:00.039144', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-17 06:10:21.652365 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:10:15.055394', 'end': '2026-04-17 06:10:15.095243', 'delta': '0:00:00.039849', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-17 06:10:21.652394 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 
06:10:15.665114', 'end': '2026-04-17 06:10:15.705675', 'delta': '0:00:00.040561', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-17 06:10:21.652406 | orchestrator | 2026-04-17 06:10:21.652423 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-17 06:10:21.652435 | orchestrator | Friday 17 April 2026 06:10:17 +0000 (0:00:00.221) 0:15:20.118 ********** 2026-04-17 06:10:21.652446 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:21.652457 | orchestrator | 2026-04-17 06:10:21.652467 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-17 06:10:21.652478 | orchestrator | Friday 17 April 2026 06:10:17 +0000 (0:00:00.298) 0:15:20.416 ********** 2026-04-17 06:10:21.652489 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:21.652499 | orchestrator | 2026-04-17 06:10:21.652510 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-17 06:10:21.652521 | orchestrator | Friday 17 April 2026 06:10:17 +0000 (0:00:00.257) 0:15:20.674 ********** 2026-04-17 06:10:21.652531 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:21.652542 | orchestrator | 2026-04-17 06:10:21.652552 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-17 06:10:21.652563 | orchestrator | Friday 17 April 2026 06:10:18 +0000 (0:00:00.197) 0:15:20.871 ********** 2026-04-17 06:10:21.652574 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 
2026-04-17 06:10:21.652611 | orchestrator | 2026-04-17 06:10:21.652625 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:10:21.652638 | orchestrator | Friday 17 April 2026 06:10:19 +0000 (0:00:01.786) 0:15:22.658 ********** 2026-04-17 06:10:21.652650 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:21.652661 | orchestrator | 2026-04-17 06:10:21.652674 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 06:10:21.652687 | orchestrator | Friday 17 April 2026 06:10:20 +0000 (0:00:00.152) 0:15:22.810 ********** 2026-04-17 06:10:21.652699 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:21.652711 | orchestrator | 2026-04-17 06:10:21.652723 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-17 06:10:21.652744 | orchestrator | Friday 17 April 2026 06:10:20 +0000 (0:00:00.134) 0:15:22.945 ********** 2026-04-17 06:10:21.652757 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:21.652769 | orchestrator | 2026-04-17 06:10:21.652781 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:10:21.652794 | orchestrator | Friday 17 April 2026 06:10:20 +0000 (0:00:00.252) 0:15:23.197 ********** 2026-04-17 06:10:21.652806 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:21.652818 | orchestrator | 2026-04-17 06:10:21.652830 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 06:10:21.652843 | orchestrator | Friday 17 April 2026 06:10:20 +0000 (0:00:00.130) 0:15:23.328 ********** 2026-04-17 06:10:21.652855 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:21.652867 | orchestrator | 2026-04-17 06:10:21.652879 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-17 06:10:21.652892 | orchestrator | Friday 
17 April 2026 06:10:20 +0000 (0:00:00.133) 0:15:23.462 ********** 2026-04-17 06:10:21.652904 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:21.652917 | orchestrator | 2026-04-17 06:10:21.652929 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-17 06:10:21.652941 | orchestrator | Friday 17 April 2026 06:10:20 +0000 (0:00:00.195) 0:15:23.658 ********** 2026-04-17 06:10:21.652954 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:21.652966 | orchestrator | 2026-04-17 06:10:21.652977 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 06:10:21.652988 | orchestrator | Friday 17 April 2026 06:10:21 +0000 (0:00:00.132) 0:15:23.790 ********** 2026-04-17 06:10:21.652998 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:21.653009 | orchestrator | 2026-04-17 06:10:21.653019 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 06:10:21.653030 | orchestrator | Friday 17 April 2026 06:10:21 +0000 (0:00:00.189) 0:15:23.980 ********** 2026-04-17 06:10:21.653041 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:21.653051 | orchestrator | 2026-04-17 06:10:21.653062 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 06:10:21.653074 | orchestrator | Friday 17 April 2026 06:10:21 +0000 (0:00:00.137) 0:15:24.117 ********** 2026-04-17 06:10:21.653085 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:21.653095 | orchestrator | 2026-04-17 06:10:21.653106 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 06:10:21.653117 | orchestrator | Friday 17 April 2026 06:10:21 +0000 (0:00:00.175) 0:15:24.292 ********** 2026-04-17 06:10:21.653128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:10:21.653153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'uuids': ['7b3e98f1-7f68-4c04-9bb1-a0fd9b3252da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p']}})  2026-04-17 06:10:21.773011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c054ea69', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-17 06:10:21.773153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2']}})  2026-04-17 06:10:21.773171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:10:21.773986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:10:21.774009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-00-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 06:10:21.774068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:10:21.774080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px', 'dm-uuid-CRYPT-LUKS2-0eb8d7ab97d34aa3a4f06ee9564e4391-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:10:21.774111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:10:21.774139 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'uuids': ['0eb8d7ab-97d3-4aa3-a4f0-6ee9564e4391'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px']}})  2026-04-17 06:10:21.774152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08']}})  2026-04-17 06:10:21.774206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:10:21.774238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fc59f804', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 06:10:22.115476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:10:22.115548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:10:22.115558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p', 'dm-uuid-CRYPT-LUKS2-7b3e98f17f684c049bb1a0fd9b3252da-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:10:22.115567 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:22.115573 | orchestrator | 2026-04-17 06:10:22.115577 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 06:10:22.115612 | orchestrator | Friday 17 April 2026 06:10:21 +0000 (0:00:00.349) 0:15:24.642 ********** 2026-04-17 06:10:22.115618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:22.115624 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'uuids': ['7b3e98f1-7f68-4c04-9bb1-a0fd9b3252da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:22.115641 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c054ea69', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:22.115671 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:22.115677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:22.115682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:22.115686 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:22.115691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:22.115705 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px', 'dm-uuid-CRYPT-LUKS2-0eb8d7ab97d34aa3a4f06ee9564e4391-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:23.868333 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:23.868434 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'uuids': ['0eb8d7ab-97d3-4aa3-a4f0-6ee9564e4391'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:23.868450 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:23.868464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:23.868529 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fc59f804', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:23.868544 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:23.868555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:23.868566 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p', 'dm-uuid-CRYPT-LUKS2-7b3e98f17f684c049bb1a0fd9b3252da-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:10:23.868584 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:23.868659 | orchestrator | 2026-04-17 06:10:23.868670 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-17 06:10:23.868681 | orchestrator | Friday 17 April 2026 06:10:22 +0000 (0:00:00.427) 0:15:25.069 ********** 2026-04-17 06:10:23.868691 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:23.868702 | orchestrator | 2026-04-17 06:10:23.868712 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-17 06:10:23.868721 | orchestrator | Friday 17 April 2026 06:10:23 +0000 (0:00:00.902) 0:15:25.972 ********** 2026-04-17 06:10:23.868731 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:23.868740 | orchestrator | 2026-04-17 06:10:23.868755 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 06:10:23.868765 | orchestrator | Friday 17 April 2026 06:10:23 +0000 (0:00:00.158) 0:15:26.130 ********** 2026-04-17 06:10:23.868775 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:23.868784 | orchestrator | 2026-04-17 06:10:23.868794 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:10:23.868810 | orchestrator | Friday 17 April 2026 06:10:23 +0000 (0:00:00.478) 0:15:26.609 ********** 2026-04-17 06:10:38.697623 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:38.697799 | orchestrator | 2026-04-17 06:10:38.697816 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 06:10:38.697830 | orchestrator | Friday 17 April 2026 06:10:24 +0000 (0:00:00.162) 0:15:26.772 ********** 2026-04-17 06:10:38.697841 | orchestrator | skipping: [testbed-node-3] 2026-04-17 
06:10:38.697852 | orchestrator | 2026-04-17 06:10:38.697863 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:10:38.697874 | orchestrator | Friday 17 April 2026 06:10:24 +0000 (0:00:00.259) 0:15:27.031 ********** 2026-04-17 06:10:38.697885 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:38.697895 | orchestrator | 2026-04-17 06:10:38.697906 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-17 06:10:38.697917 | orchestrator | Friday 17 April 2026 06:10:24 +0000 (0:00:00.147) 0:15:27.179 ********** 2026-04-17 06:10:38.697929 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-17 06:10:38.697940 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-17 06:10:38.697951 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-17 06:10:38.697961 | orchestrator | 2026-04-17 06:10:38.697972 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-17 06:10:38.697983 | orchestrator | Friday 17 April 2026 06:10:25 +0000 (0:00:00.673) 0:15:27.852 ********** 2026-04-17 06:10:38.697994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-17 06:10:38.698005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-17 06:10:38.698075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-17 06:10:38.698090 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:38.698101 | orchestrator | 2026-04-17 06:10:38.698112 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-17 06:10:38.698123 | orchestrator | Friday 17 April 2026 06:10:25 +0000 (0:00:00.170) 0:15:28.023 ********** 2026-04-17 06:10:38.698133 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-04-17 06:10:38.698145 | 
orchestrator | 2026-04-17 06:10:38.698157 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 06:10:38.698205 | orchestrator | Friday 17 April 2026 06:10:25 +0000 (0:00:00.247) 0:15:28.270 ********** 2026-04-17 06:10:38.698219 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:38.698253 | orchestrator | 2026-04-17 06:10:38.698266 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-17 06:10:38.698278 | orchestrator | Friday 17 April 2026 06:10:25 +0000 (0:00:00.159) 0:15:28.430 ********** 2026-04-17 06:10:38.698291 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:38.698303 | orchestrator | 2026-04-17 06:10:38.698315 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 06:10:38.698328 | orchestrator | Friday 17 April 2026 06:10:25 +0000 (0:00:00.177) 0:15:28.607 ********** 2026-04-17 06:10:38.698340 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:38.698352 | orchestrator | 2026-04-17 06:10:38.698364 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 06:10:38.698376 | orchestrator | Friday 17 April 2026 06:10:26 +0000 (0:00:00.152) 0:15:28.759 ********** 2026-04-17 06:10:38.698389 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:38.698401 | orchestrator | 2026-04-17 06:10:38.698414 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 06:10:38.698426 | orchestrator | Friday 17 April 2026 06:10:26 +0000 (0:00:00.265) 0:15:29.025 ********** 2026-04-17 06:10:38.698436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 06:10:38.698447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 06:10:38.698458 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-04-17 06:10:38.698469 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:38.698479 | orchestrator | 2026-04-17 06:10:38.698490 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:10:38.698501 | orchestrator | Friday 17 April 2026 06:10:27 +0000 (0:00:01.177) 0:15:30.203 ********** 2026-04-17 06:10:38.698511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 06:10:38.698522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 06:10:38.698532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 06:10:38.698543 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:38.698554 | orchestrator | 2026-04-17 06:10:38.698564 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:10:38.698575 | orchestrator | Friday 17 April 2026 06:10:27 +0000 (0:00:00.444) 0:15:30.647 ********** 2026-04-17 06:10:38.698585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 06:10:38.698596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 06:10:38.698607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 06:10:38.698617 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:10:38.698628 | orchestrator | 2026-04-17 06:10:38.698658 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 06:10:38.698669 | orchestrator | Friday 17 April 2026 06:10:28 +0000 (0:00:00.446) 0:15:31.094 ********** 2026-04-17 06:10:38.698680 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:10:38.698691 | orchestrator | 2026-04-17 06:10:38.698701 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:10:38.698726 | orchestrator | Friday 17 April 2026 06:10:28 +0000 
(0:00:00.186) 0:15:31.280 ********** 2026-04-17 06:10:38.698737 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-17 06:10:38.698748 | orchestrator | 2026-04-17 06:10:38.698759 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-17 06:10:38.698770 | orchestrator | Friday 17 April 2026 06:10:28 +0000 (0:00:00.368) 0:15:31.648 ********** 2026-04-17 06:10:38.698799 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:10:38.698811 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:10:38.698821 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:10:38.698832 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-17 06:10:38.698850 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 06:10:38.698861 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 06:10:38.698872 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:10:38.698882 | orchestrator | 2026-04-17 06:10:38.698893 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-17 06:10:38.698903 | orchestrator | Friday 17 April 2026 06:10:29 +0000 (0:00:00.878) 0:15:32.527 ********** 2026-04-17 06:10:38.698914 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:10:38.698924 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:10:38.698935 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:10:38.698946 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-17 
06:10:38.698956 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 06:10:38.698967 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:10:38.698978 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:10:38.698988 | orchestrator |
2026-04-17 06:10:38.698999 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-04-17 06:10:38.699009 | orchestrator | Friday 17 April 2026 06:10:31 +0000 (0:00:01.797) 0:15:34.324 **********
2026-04-17 06:10:38.699020 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:38.699030 | orchestrator |
2026-04-17 06:10:38.699041 | orchestrator | TASK [Set num_osds] ************************************************************
2026-04-17 06:10:38.699051 | orchestrator | Friday 17 April 2026 06:10:32 +0000 (0:00:00.490) 0:15:34.815 **********
2026-04-17 06:10:38.699062 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:38.699072 | orchestrator |
2026-04-17 06:10:38.699083 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-04-17 06:10:38.699094 | orchestrator | Friday 17 April 2026 06:10:32 +0000 (0:00:00.170) 0:15:34.986 **********
2026-04-17 06:10:38.699104 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:38.699115 | orchestrator |
2026-04-17 06:10:38.699125 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-04-17 06:10:38.699135 | orchestrator | Friday 17 April 2026 06:10:32 +0000 (0:00:00.250) 0:15:35.237 **********
2026-04-17 06:10:38.699146 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-17 06:10:38.699157 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-04-17 06:10:38.699167 | orchestrator |
2026-04-17 06:10:38.699178 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 06:10:38.699188 | orchestrator | Friday 17 April 2026 06:10:35 +0000 (0:00:03.105) 0:15:38.343 **********
2026-04-17 06:10:38.699199 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-04-17 06:10:38.699210 | orchestrator |
2026-04-17 06:10:38.699220 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 06:10:38.699230 | orchestrator | Friday 17 April 2026 06:10:36 +0000 (0:00:00.591) 0:15:38.934 **********
2026-04-17 06:10:38.699241 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-04-17 06:10:38.699252 | orchestrator |
2026-04-17 06:10:38.699262 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 06:10:38.699273 | orchestrator | Friday 17 April 2026 06:10:36 +0000 (0:00:00.257) 0:15:39.192 **********
2026-04-17 06:10:38.699283 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:38.699294 | orchestrator |
2026-04-17 06:10:38.699304 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 06:10:38.699315 | orchestrator | Friday 17 April 2026 06:10:36 +0000 (0:00:00.135) 0:15:39.327 **********
2026-04-17 06:10:38.699334 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:38.699345 | orchestrator |
2026-04-17 06:10:38.699355 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 06:10:38.699366 | orchestrator | Friday 17 April 2026 06:10:37 +0000 (0:00:00.563) 0:15:39.891 **********
2026-04-17 06:10:38.699376 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:38.699387 | orchestrator |
2026-04-17 06:10:38.699397 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 06:10:38.699408 | orchestrator | Friday 17 April 2026 06:10:37 +0000 (0:00:00.533) 0:15:40.424 **********
2026-04-17 06:10:38.699418 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:38.699429 | orchestrator |
2026-04-17 06:10:38.699439 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 06:10:38.699450 | orchestrator | Friday 17 April 2026 06:10:38 +0000 (0:00:00.568) 0:15:40.993 **********
2026-04-17 06:10:38.699460 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:38.699471 | orchestrator |
2026-04-17 06:10:38.699481 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 06:10:38.699498 | orchestrator | Friday 17 April 2026 06:10:38 +0000 (0:00:00.159) 0:15:41.153 **********
2026-04-17 06:10:38.699508 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:38.699519 | orchestrator |
2026-04-17 06:10:38.699530 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 06:10:38.699540 | orchestrator | Friday 17 April 2026 06:10:38 +0000 (0:00:00.121) 0:15:41.274 **********
2026-04-17 06:10:38.699551 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:38.699562 | orchestrator |
2026-04-17 06:10:38.699579 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 06:10:50.221496 | orchestrator | Friday 17 April 2026 06:10:38 +0000 (0:00:00.156) 0:15:41.431 **********
2026-04-17 06:10:50.221612 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:50.221630 | orchestrator |
2026-04-17 06:10:50.221643 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 06:10:50.221655 | orchestrator | Friday 17 April 2026 06:10:39 +0000 (0:00:00.531) 0:15:41.963 **********
2026-04-17 06:10:50.221666 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:50.221766 | orchestrator |
2026-04-17 06:10:50.221778 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 06:10:50.221789 | orchestrator | Friday 17 April 2026 06:10:39 +0000 (0:00:00.529) 0:15:42.492 **********
2026-04-17 06:10:50.221800 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.221827 | orchestrator |
2026-04-17 06:10:50.221850 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 06:10:50.221863 | orchestrator | Friday 17 April 2026 06:10:39 +0000 (0:00:00.157) 0:15:42.649 **********
2026-04-17 06:10:50.221874 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.221885 | orchestrator |
2026-04-17 06:10:50.221896 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 06:10:50.221907 | orchestrator | Friday 17 April 2026 06:10:40 +0000 (0:00:00.525) 0:15:43.174 **********
2026-04-17 06:10:50.221917 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:50.221928 | orchestrator |
2026-04-17 06:10:50.221939 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 06:10:50.221950 | orchestrator | Friday 17 April 2026 06:10:40 +0000 (0:00:00.153) 0:15:43.328 **********
2026-04-17 06:10:50.221961 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:50.221972 | orchestrator |
2026-04-17 06:10:50.221983 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 06:10:50.221994 | orchestrator | Friday 17 April 2026 06:10:40 +0000 (0:00:00.177) 0:15:43.505 **********
2026-04-17 06:10:50.222005 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:50.222066 | orchestrator |
2026-04-17 06:10:50.222082 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 06:10:50.222093 | orchestrator | Friday 17 April 2026 06:10:40 +0000 (0:00:00.144) 0:15:43.650 **********
2026-04-17 06:10:50.222105 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222143 | orchestrator |
2026-04-17 06:10:50.222156 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 06:10:50.222168 | orchestrator | Friday 17 April 2026 06:10:41 +0000 (0:00:00.146) 0:15:43.796 **********
2026-04-17 06:10:50.222180 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222192 | orchestrator |
2026-04-17 06:10:50.222203 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 06:10:50.222216 | orchestrator | Friday 17 April 2026 06:10:41 +0000 (0:00:00.135) 0:15:43.932 **********
2026-04-17 06:10:50.222229 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222241 | orchestrator |
2026-04-17 06:10:50.222253 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 06:10:50.222265 | orchestrator | Friday 17 April 2026 06:10:41 +0000 (0:00:00.132) 0:15:44.064 **********
2026-04-17 06:10:50.222278 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:50.222289 | orchestrator |
2026-04-17 06:10:50.222302 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 06:10:50.222313 | orchestrator | Friday 17 April 2026 06:10:41 +0000 (0:00:00.172) 0:15:44.236 **********
2026-04-17 06:10:50.222326 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:50.222337 | orchestrator |
2026-04-17 06:10:50.222350 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-17 06:10:50.222362 | orchestrator | Friday 17 April 2026 06:10:41 +0000 (0:00:00.238) 0:15:44.474 **********
2026-04-17 06:10:50.222374 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222387 | orchestrator |
2026-04-17 06:10:50.222398 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-17 06:10:50.222409 | orchestrator | Friday 17 April 2026 06:10:41 +0000 (0:00:00.125) 0:15:44.600 **********
2026-04-17 06:10:50.222419 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222430 | orchestrator |
2026-04-17 06:10:50.222441 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-17 06:10:50.222452 | orchestrator | Friday 17 April 2026 06:10:41 +0000 (0:00:00.142) 0:15:44.742 **********
2026-04-17 06:10:50.222462 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222473 | orchestrator |
2026-04-17 06:10:50.222484 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-17 06:10:50.222494 | orchestrator | Friday 17 April 2026 06:10:42 +0000 (0:00:00.134) 0:15:44.877 **********
2026-04-17 06:10:50.222505 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222516 | orchestrator |
2026-04-17 06:10:50.222526 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-17 06:10:50.222537 | orchestrator | Friday 17 April 2026 06:10:42 +0000 (0:00:00.527) 0:15:45.404 **********
2026-04-17 06:10:50.222547 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222558 | orchestrator |
2026-04-17 06:10:50.222569 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-17 06:10:50.222579 | orchestrator | Friday 17 April 2026 06:10:42 +0000 (0:00:00.127) 0:15:45.531 **********
2026-04-17 06:10:50.222590 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222600 | orchestrator |
2026-04-17 06:10:50.222611 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-17 06:10:50.222622 | orchestrator | Friday 17 April 2026 06:10:42 +0000 (0:00:00.157) 0:15:45.689 **********
2026-04-17 06:10:50.222632 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222643 | orchestrator |
2026-04-17 06:10:50.222668 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-17 06:10:50.222699 | orchestrator | Friday 17 April 2026 06:10:43 +0000 (0:00:00.153) 0:15:45.843 **********
2026-04-17 06:10:50.222710 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222721 | orchestrator |
2026-04-17 06:10:50.222732 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-17 06:10:50.222742 | orchestrator | Friday 17 April 2026 06:10:43 +0000 (0:00:00.139) 0:15:45.983 **********
2026-04-17 06:10:50.222781 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222792 | orchestrator |
2026-04-17 06:10:50.222803 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-17 06:10:50.222813 | orchestrator | Friday 17 April 2026 06:10:43 +0000 (0:00:00.174) 0:15:46.158 **********
2026-04-17 06:10:50.222824 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222835 | orchestrator |
2026-04-17 06:10:50.222845 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-17 06:10:50.222856 | orchestrator | Friday 17 April 2026 06:10:43 +0000 (0:00:00.132) 0:15:46.290 **********
2026-04-17 06:10:50.222866 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222877 | orchestrator |
2026-04-17 06:10:50.222887 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-17 06:10:50.222898 | orchestrator | Friday 17 April 2026 06:10:43 +0000 (0:00:00.134) 0:15:46.425 **********
2026-04-17 06:10:50.222909 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.222919 | orchestrator |
2026-04-17 06:10:50.222930 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-17 06:10:50.222941 | orchestrator | Friday 17 April 2026 06:10:43 +0000 (0:00:00.241) 0:15:46.667 **********
2026-04-17 06:10:50.222951 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:50.222962 | orchestrator |
2026-04-17 06:10:50.222973 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-17 06:10:50.222983 | orchestrator | Friday 17 April 2026 06:10:44 +0000 (0:00:00.920) 0:15:47.587 **********
2026-04-17 06:10:50.222994 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:50.223004 | orchestrator |
2026-04-17 06:10:50.223015 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-17 06:10:50.223025 | orchestrator | Friday 17 April 2026 06:10:46 +0000 (0:00:01.310) 0:15:48.898 **********
2026-04-17 06:10:50.223036 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-04-17 06:10:50.223048 | orchestrator |
2026-04-17 06:10:50.223058 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-17 06:10:50.223069 | orchestrator | Friday 17 April 2026 06:10:46 +0000 (0:00:00.647) 0:15:49.545 **********
2026-04-17 06:10:50.223079 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.223090 | orchestrator |
2026-04-17 06:10:50.223101 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-17 06:10:50.223111 | orchestrator | Friday 17 April 2026 06:10:46 +0000 (0:00:00.144) 0:15:49.690 **********
2026-04-17 06:10:50.223122 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.223132 | orchestrator |
2026-04-17 06:10:50.223143 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-17 06:10:50.223154 | orchestrator | Friday 17 April 2026 06:10:47 +0000 (0:00:00.180) 0:15:49.870 **********
2026-04-17 06:10:50.223164 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 06:10:50.223175 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 06:10:50.223185 | orchestrator |
2026-04-17 06:10:50.223196 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-17 06:10:50.223206 | orchestrator | Friday 17 April 2026 06:10:47 +0000 (0:00:00.801) 0:15:50.671 **********
2026-04-17 06:10:50.223217 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:50.223227 | orchestrator |
2026-04-17 06:10:50.223238 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-17 06:10:50.223249 | orchestrator | Friday 17 April 2026 06:10:48 +0000 (0:00:00.462) 0:15:51.134 **********
2026-04-17 06:10:50.223259 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.223270 | orchestrator |
2026-04-17 06:10:50.223280 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-17 06:10:50.223291 | orchestrator | Friday 17 April 2026 06:10:48 +0000 (0:00:00.149) 0:15:51.283 **********
2026-04-17 06:10:50.223301 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.223312 | orchestrator |
2026-04-17 06:10:50.223329 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-17 06:10:50.223339 | orchestrator | Friday 17 April 2026 06:10:48 +0000 (0:00:00.152) 0:15:51.436 **********
2026-04-17 06:10:50.223350 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.223361 | orchestrator |
2026-04-17 06:10:50.223371 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-17 06:10:50.223382 | orchestrator | Friday 17 April 2026 06:10:48 +0000 (0:00:00.156) 0:15:51.592 **********
2026-04-17 06:10:50.223392 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-04-17 06:10:50.223403 | orchestrator |
2026-04-17 06:10:50.223414 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-17 06:10:50.223424 | orchestrator | Friday 17 April 2026 06:10:49 +0000 (0:00:00.249) 0:15:51.841 **********
2026-04-17 06:10:50.223435 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:10:50.223445 | orchestrator |
2026-04-17 06:10:50.223456 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-17 06:10:50.223466 | orchestrator | Friday 17 April 2026 06:10:49 +0000 (0:00:00.156) 0:15:52.565 **********
2026-04-17 06:10:50.223477 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 06:10:50.223487 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 06:10:50.223498 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 06:10:50.223508 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.223519 | orchestrator |
2026-04-17 06:10:50.223535 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-17 06:10:50.223546 | orchestrator | Friday 17 April 2026 06:10:49 +0000 (0:00:00.156) 0:15:52.722 **********
2026-04-17 06:10:50.223556 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:10:50.223567 | orchestrator |
2026-04-17 06:10:50.223577 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-17 06:10:50.223588 | orchestrator | Friday 17 April 2026 06:10:50 +0000 (0:00:00.152) 0:15:52.874 **********
2026-04-17 06:10:50.223604 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.524831 | orchestrator |
2026-04-17 06:11:08.524936 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-17 06:11:08.524949 | orchestrator | Friday 17 April 2026 06:10:50 +0000 (0:00:00.182) 0:15:53.057 **********
2026-04-17 06:11:08.524958 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.524968 | orchestrator |
2026-04-17 06:11:08.524977 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-17 06:11:08.524986 | orchestrator | Friday 17 April 2026 06:10:50 +0000 (0:00:00.556) 0:15:53.613 **********
2026-04-17 06:11:08.524995 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525004 | orchestrator |
2026-04-17 06:11:08.525026 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-17 06:11:08.525043 | orchestrator | Friday 17 April 2026 06:10:51 +0000 (0:00:00.172) 0:15:53.786 **********
2026-04-17 06:11:08.525052 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525061 | orchestrator |
2026-04-17 06:11:08.525069 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-17 06:11:08.525078 | orchestrator | Friday 17 April 2026 06:10:51 +0000 (0:00:00.162) 0:15:53.949 **********
2026-04-17 06:11:08.525087 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:11:08.525096 | orchestrator |
2026-04-17 06:11:08.525105 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-17 06:11:08.525115 | orchestrator | Friday 17 April 2026 06:10:52 +0000 (0:00:01.538) 0:15:55.487 **********
2026-04-17 06:11:08.525123 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:11:08.525132 | orchestrator |
2026-04-17 06:11:08.525140 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-17 06:11:08.525149 | orchestrator | Friday 17 April 2026 06:10:52 +0000 (0:00:00.145) 0:15:55.632 **********
2026-04-17 06:11:08.525178 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-04-17 06:11:08.525187 | orchestrator |
2026-04-17 06:11:08.525196 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-17 06:11:08.525204 | orchestrator | Friday 17 April 2026 06:10:53 +0000 (0:00:00.247) 0:15:55.879 **********
2026-04-17 06:11:08.525213 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525221 | orchestrator |
2026-04-17 06:11:08.525230 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-17 06:11:08.525239 | orchestrator | Friday 17 April 2026 06:10:53 +0000 (0:00:00.167) 0:15:56.047 **********
2026-04-17 06:11:08.525247 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525256 | orchestrator |
2026-04-17 06:11:08.525265 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-17 06:11:08.525274 | orchestrator | Friday 17 April 2026 06:10:53 +0000 (0:00:00.148) 0:15:56.195 **********
2026-04-17 06:11:08.525282 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525291 | orchestrator |
2026-04-17 06:11:08.525300 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-17 06:11:08.525308 | orchestrator | Friday 17 April 2026 06:10:53 +0000 (0:00:00.159) 0:15:56.354 **********
2026-04-17 06:11:08.525317 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525325 | orchestrator |
2026-04-17 06:11:08.525334 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-17 06:11:08.525342 | orchestrator | Friday 17 April 2026 06:10:53 +0000 (0:00:00.181) 0:15:56.536 **********
2026-04-17 06:11:08.525351 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525362 | orchestrator |
2026-04-17 06:11:08.525372 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-17 06:11:08.525382 | orchestrator | Friday 17 April 2026 06:10:53 +0000 (0:00:00.154) 0:15:56.691 **********
2026-04-17 06:11:08.525392 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525402 | orchestrator |
2026-04-17 06:11:08.525413 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-17 06:11:08.525423 | orchestrator | Friday 17 April 2026 06:10:54 +0000 (0:00:00.181) 0:15:56.873 **********
2026-04-17 06:11:08.525433 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525443 | orchestrator |
2026-04-17 06:11:08.525453 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-17 06:11:08.525463 | orchestrator | Friday 17 April 2026 06:10:54 +0000 (0:00:00.570) 0:15:57.443 **********
2026-04-17 06:11:08.525473 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525483 | orchestrator |
2026-04-17 06:11:08.525493 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-17 06:11:08.525503 | orchestrator | Friday 17 April 2026 06:10:54 +0000 (0:00:00.165) 0:15:57.609 **********
2026-04-17 06:11:08.525513 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:11:08.525523 | orchestrator |
2026-04-17 06:11:08.525532 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-17 06:11:08.525542 | orchestrator | Friday 17 April 2026 06:10:55 +0000 (0:00:00.268) 0:15:57.877 **********
2026-04-17 06:11:08.525552 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-04-17 06:11:08.525564 | orchestrator |
2026-04-17 06:11:08.525575 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-17 06:11:08.525583 | orchestrator | Friday 17 April 2026 06:10:55 +0000 (0:00:00.218) 0:15:58.096 **********
2026-04-17 06:11:08.525593 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-04-17 06:11:08.525602 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-17 06:11:08.525611 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-17 06:11:08.525633 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-17 06:11:08.525642 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-17 06:11:08.525651 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-17 06:11:08.525665 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-17 06:11:08.525674 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-17 06:11:08.525683 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-17 06:11:08.525707 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-17 06:11:08.525716 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-17 06:11:08.525738 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-17 06:11:08.525747 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-17 06:11:08.525756 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-17 06:11:08.525764 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-04-17 06:11:08.525773 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-04-17 06:11:08.525781 | orchestrator |
2026-04-17 06:11:08.525790 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-17 06:11:08.525799 | orchestrator | Friday 17 April 2026 06:11:00 +0000 (0:00:05.496) 0:16:03.593 **********
2026-04-17 06:11:08.525807 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-04-17 06:11:08.525816 | orchestrator |
2026-04-17 06:11:08.525825 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-17 06:11:08.525833 | orchestrator | Friday 17 April 2026 06:11:01 +0000 (0:00:00.611) 0:16:04.204 **********
2026-04-17 06:11:08.525842 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:11:08.525852 | orchestrator |
2026-04-17 06:11:08.525860 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-17 06:11:08.525869 | orchestrator | Friday 17 April 2026 06:11:01 +0000 (0:00:00.511) 0:16:04.715 **********
2026-04-17 06:11:08.525878 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:11:08.525886 | orchestrator |
2026-04-17 06:11:08.525895 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-17 06:11:08.525904 | orchestrator | Friday 17 April 2026 06:11:02 +0000 (0:00:00.981) 0:16:05.696 **********
2026-04-17 06:11:08.525912 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525921 | orchestrator |
2026-04-17 06:11:08.525930 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-17 06:11:08.525938 | orchestrator | Friday 17 April 2026 06:11:03 +0000 (0:00:00.168) 0:16:05.865 **********
2026-04-17 06:11:08.525947 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525956 | orchestrator |
2026-04-17 06:11:08.525964 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-17 06:11:08.525973 | orchestrator | Friday 17 April 2026 06:11:03 +0000 (0:00:00.147) 0:16:06.013 **********
2026-04-17 06:11:08.525981 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.525990 | orchestrator |
2026-04-17 06:11:08.525999 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-17 06:11:08.526007 | orchestrator | Friday 17 April 2026 06:11:03 +0000 (0:00:00.487) 0:16:06.501 **********
2026-04-17 06:11:08.526069 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.526080 | orchestrator |
2026-04-17 06:11:08.526089 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-17 06:11:08.526097 | orchestrator | Friday 17 April 2026 06:11:03 +0000 (0:00:00.141) 0:16:06.642 **********
2026-04-17 06:11:08.526106 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.526115 | orchestrator |
2026-04-17 06:11:08.526123 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-17 06:11:08.526132 | orchestrator | Friday 17 April 2026 06:11:04 +0000 (0:00:00.141) 0:16:06.784 **********
2026-04-17 06:11:08.526141 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.526157 | orchestrator |
2026-04-17 06:11:08.526166 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-17 06:11:08.526174 | orchestrator | Friday 17 April 2026 06:11:04 +0000 (0:00:00.136) 0:16:06.920 **********
2026-04-17 06:11:08.526183 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.526191 | orchestrator |
2026-04-17 06:11:08.526200 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-17 06:11:08.526208 | orchestrator | Friday 17 April 2026 06:11:04 +0000 (0:00:00.154) 0:16:07.075 **********
2026-04-17 06:11:08.526217 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.526225 | orchestrator |
2026-04-17 06:11:08.526234 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-17 06:11:08.526242 | orchestrator | Friday 17 April 2026 06:11:04 +0000 (0:00:00.142) 0:16:07.215 **********
2026-04-17 06:11:08.526251 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.526259 | orchestrator |
2026-04-17 06:11:08.526268 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-17 06:11:08.526277 | orchestrator | Friday 17 April 2026 06:11:04 +0000 (0:00:00.142) 0:16:07.357 **********
2026-04-17 06:11:08.526285 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:08.526293 | orchestrator |
2026-04-17 06:11:08.526302 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-17 06:11:08.526310 | orchestrator | Friday 17 April 2026 06:11:04 +0000 (0:00:00.142) 0:16:07.500 **********
2026-04-17 06:11:08.526319 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:11:08.526327 | orchestrator |
2026-04-17 06:11:08.526336 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-17 06:11:08.526349 | orchestrator | Friday 17 April 2026 06:11:04 +0000 (0:00:00.224) 0:16:07.724 **********
2026-04-17 06:11:08.526358 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-17 06:11:08.526367 | orchestrator |
2026-04-17 06:11:08.526375 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-17 06:11:08.526383 | orchestrator | Friday 17 April 2026 06:11:08 +0000 (0:00:03.426) 0:16:11.151 **********
2026-04-17 06:11:08.526398 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:11:31.901596 | orchestrator |
2026-04-17 06:11:31.901710 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-17 06:11:31.901726 | orchestrator | Friday 17 April 2026 06:11:08 +0000 (0:00:00.210) 0:16:11.361 **********
2026-04-17 06:11:31.901739 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-17 06:11:31.901753 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-17 06:11:31.901765 | orchestrator |
2026-04-17 06:11:31.901775 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-17 06:11:31.901785 | orchestrator | Friday 17 April 2026 06:11:15 +0000 (0:00:06.678) 0:16:18.040 **********
2026-04-17 06:11:31.901794 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:31.901899 | orchestrator |
2026-04-17 06:11:31.901921 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-17 06:11:31.901932 | orchestrator | Friday 17 April 2026 06:11:15 +0000 (0:00:00.138) 0:16:18.179 **********
2026-04-17 06:11:31.901942 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:31.901975 | orchestrator |
2026-04-17 06:11:31.901986 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:11:31.901998 | orchestrator | Friday 17 April 2026 06:11:16 +0000 (0:00:00.583) 0:16:18.762 **********
2026-04-17 06:11:31.902007 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:31.902073 | orchestrator |
2026-04-17 06:11:31.902084 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:11:31.902094 | orchestrator | Friday 17 April 2026 06:11:16 +0000 (0:00:00.184) 0:16:18.946 **********
2026-04-17 06:11:31.902103 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:31.902113 | orchestrator |
2026-04-17 06:11:31.902122 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:11:31.902132 | orchestrator | Friday 17 April 2026 06:11:16 +0000 (0:00:00.167) 0:16:19.114 **********
2026-04-17 06:11:31.902143 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:31.902154 | orchestrator |
2026-04-17 06:11:31.902165 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:11:31.902176 | orchestrator | Friday 17 April 2026 06:11:16 +0000 (0:00:00.189) 0:16:19.303 **********
2026-04-17 06:11:31.902187 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:11:31.902199 | orchestrator |
2026-04-17 06:11:31.902210 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:11:31.902221 | orchestrator | Friday 17 April 2026 06:11:16 +0000 (0:00:00.259) 0:16:19.562 **********
2026-04-17 06:11:31.902231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 06:11:31.902244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 06:11:31.902254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 06:11:31.902265 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:31.902276 | orchestrator |
2026-04-17 06:11:31.902287 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 06:11:31.902297 | orchestrator | Friday 17 April 2026 06:11:17 +0000 (0:00:00.438) 0:16:20.001 **********
2026-04-17 06:11:31.902308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 06:11:31.902319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 06:11:31.902330 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 06:11:31.902342 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:31.902352 | orchestrator |
2026-04-17 06:11:31.902363 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 06:11:31.902374 | orchestrator | Friday 17 April 2026 06:11:17 +0000 (0:00:00.421) 0:16:20.422 **********
2026-04-17 06:11:31.902384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 06:11:31.902395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 06:11:31.902405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 06:11:31.902416 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:31.902427 | orchestrator |
2026-04-17 06:11:31.902438 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 06:11:31.902450 | orchestrator | Friday 17 April 2026 06:11:18 +0000 (0:00:00.481) 0:16:20.904 **********
2026-04-17 06:11:31.902461 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:11:31.902472 | orchestrator |
2026-04-17 06:11:31.902483 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 06:11:31.902495 | orchestrator | Friday 17 April 2026 06:11:18 +0000 (0:00:00.174) 0:16:21.079 **********
2026-04-17 06:11:31.902505 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-17 06:11:31.902515 | orchestrator |
2026-04-17 06:11:31.902538 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-17 06:11:31.902549 | orchestrator | Friday 17 April 2026 06:11:18 +0000 (0:00:00.464) 0:16:21.543 **********
2026-04-17 06:11:31.902558 | orchestrator | changed: [testbed-node-3]
2026-04-17 06:11:31.902568 | orchestrator |
2026-04-17 06:11:31.902585 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-17 06:11:31.902595 | orchestrator | Friday 17 April 2026 06:11:19 +0000 (0:00:00.855) 0:16:22.399 **********
2026-04-17 06:11:31.902605 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:11:31.902614 | orchestrator |
2026-04-17 06:11:31.902641 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-17 06:11:31.902652 | orchestrator | Friday 17 April 2026 06:11:19 +0000 (0:00:00.165) 0:16:22.565 **********
2026-04-17 06:11:31.902661 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:11:31.902671 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:11:31.902681 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:11:31.902691 | orchestrator |
2026-04-17 06:11:31.902700 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-17 06:11:31.902710 | orchestrator | Friday 17 April 2026 06:11:21 +0000 (0:00:01.619) 0:16:24.184 **********
2026-04-17 06:11:31.902719 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3
2026-04-17 06:11:31.902729 | orchestrator |
2026-04-17 06:11:31.902739 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-17 06:11:31.902748 | orchestrator | Friday 17 April 2026 06:11:22 +0000 (0:00:00.569) 0:16:24.754 **********
2026-04-17 06:11:31.902758 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:11:31.902767 | orchestrator |
2026-04-17 06:11:31.902777 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-17 06:11:31.902786 | orchestrator | Friday 17 April 2026 06:11:22 +0000 (0:00:00.155)
0:16:24.909 ********** 2026-04-17 06:11:31.902816 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:11:31.902827 | orchestrator | 2026-04-17 06:11:31.902837 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-17 06:11:31.902846 | orchestrator | Friday 17 April 2026 06:11:22 +0000 (0:00:00.139) 0:16:25.049 ********** 2026-04-17 06:11:31.902856 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:11:31.902866 | orchestrator | 2026-04-17 06:11:31.902876 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-17 06:11:31.902886 | orchestrator | Friday 17 April 2026 06:11:22 +0000 (0:00:00.457) 0:16:25.506 ********** 2026-04-17 06:11:31.902895 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:11:31.902905 | orchestrator | 2026-04-17 06:11:31.902914 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-17 06:11:31.902924 | orchestrator | Friday 17 April 2026 06:11:22 +0000 (0:00:00.167) 0:16:25.674 ********** 2026-04-17 06:11:31.902934 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-17 06:11:31.902943 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-17 06:11:31.902953 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-17 06:11:31.902963 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-17 06:11:31.902972 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-17 06:11:31.902982 | orchestrator | 2026-04-17 06:11:31.902992 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-17 06:11:31.903001 | orchestrator | Friday 17 April 2026 06:11:25 +0000 (0:00:03.014) 0:16:28.688 ********** 2026-04-17 
06:11:31.903011 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:11:31.903020 | orchestrator | 2026-04-17 06:11:31.903030 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-17 06:11:31.903039 | orchestrator | Friday 17 April 2026 06:11:26 +0000 (0:00:00.139) 0:16:28.828 ********** 2026-04-17 06:11:31.903049 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-04-17 06:11:31.903059 | orchestrator | 2026-04-17 06:11:31.903068 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-17 06:11:31.903085 | orchestrator | Friday 17 April 2026 06:11:26 +0000 (0:00:00.594) 0:16:29.423 ********** 2026-04-17 06:11:31.903095 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-17 06:11:31.903105 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-17 06:11:31.903115 | orchestrator | 2026-04-17 06:11:31.903124 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-17 06:11:31.903134 | orchestrator | Friday 17 April 2026 06:11:27 +0000 (0:00:00.810) 0:16:30.233 ********** 2026-04-17 06:11:31.903143 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 06:11:31.903153 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-17 06:11:31.903163 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 06:11:31.903172 | orchestrator | 2026-04-17 06:11:31.903182 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-17 06:11:31.903191 | orchestrator | Friday 17 April 2026 06:11:30 +0000 (0:00:02.566) 0:16:32.799 ********** 2026-04-17 06:11:31.903201 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-17 06:11:31.903211 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-17 
06:11:31.903221 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:11:31.903230 | orchestrator | 2026-04-17 06:11:31.903240 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-17 06:11:31.903250 | orchestrator | Friday 17 April 2026 06:11:31 +0000 (0:00:01.297) 0:16:34.096 ********** 2026-04-17 06:11:31.903259 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:11:31.903269 | orchestrator | 2026-04-17 06:11:31.903284 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-17 06:11:31.903294 | orchestrator | Friday 17 April 2026 06:11:31 +0000 (0:00:00.257) 0:16:34.354 ********** 2026-04-17 06:11:31.903304 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:11:31.903313 | orchestrator | 2026-04-17 06:11:31.903323 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-17 06:11:31.903332 | orchestrator | Friday 17 April 2026 06:11:31 +0000 (0:00:00.151) 0:16:34.506 ********** 2026-04-17 06:11:31.903342 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:11:31.903352 | orchestrator | 2026-04-17 06:11:31.903367 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-17 06:12:06.342629 | orchestrator | Friday 17 April 2026 06:11:31 +0000 (0:00:00.133) 0:16:34.639 ********** 2026-04-17 06:12:06.342748 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-04-17 06:12:06.342765 | orchestrator | 2026-04-17 06:12:06.342777 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-17 06:12:06.342788 | orchestrator | Friday 17 April 2026 06:11:32 +0000 (0:00:00.595) 0:16:35.235 ********** 2026-04-17 06:12:06.342799 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:12:06.342811 | orchestrator | 2026-04-17 06:12:06.342822 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-04-17 06:12:06.342833 | orchestrator | Friday 17 April 2026 06:11:32 +0000 (0:00:00.463) 0:16:35.698 ********** 2026-04-17 06:12:06.342844 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:12:06.342855 | orchestrator | 2026-04-17 06:12:06.342865 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-17 06:12:06.342876 | orchestrator | Friday 17 April 2026 06:11:35 +0000 (0:00:02.552) 0:16:38.250 ********** 2026-04-17 06:12:06.342887 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-04-17 06:12:06.342958 | orchestrator | 2026-04-17 06:12:06.342974 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-17 06:12:06.342984 | orchestrator | Friday 17 April 2026 06:11:36 +0000 (0:00:00.602) 0:16:38.853 ********** 2026-04-17 06:12:06.342995 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:12:06.343006 | orchestrator | 2026-04-17 06:12:06.343017 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-17 06:12:06.343052 | orchestrator | Friday 17 April 2026 06:11:37 +0000 (0:00:01.034) 0:16:39.887 ********** 2026-04-17 06:12:06.343064 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:12:06.343074 | orchestrator | 2026-04-17 06:12:06.343085 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-17 06:12:06.343096 | orchestrator | Friday 17 April 2026 06:11:38 +0000 (0:00:00.949) 0:16:40.836 ********** 2026-04-17 06:12:06.343106 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:12:06.343117 | orchestrator | 2026-04-17 06:12:06.343127 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-17 06:12:06.343138 | orchestrator | Friday 17 April 2026 06:11:39 +0000 (0:00:01.240) 0:16:42.077 ********** 2026-04-17 
06:12:06.343151 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343165 | orchestrator | 2026-04-17 06:12:06.343179 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-17 06:12:06.343192 | orchestrator | Friday 17 April 2026 06:11:39 +0000 (0:00:00.537) 0:16:42.615 ********** 2026-04-17 06:12:06.343204 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343216 | orchestrator | 2026-04-17 06:12:06.343226 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-17 06:12:06.343237 | orchestrator | Friday 17 April 2026 06:11:40 +0000 (0:00:00.211) 0:16:42.827 ********** 2026-04-17 06:12:06.343247 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-17 06:12:06.343258 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-17 06:12:06.343269 | orchestrator | 2026-04-17 06:12:06.343279 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-17 06:12:06.343290 | orchestrator | Friday 17 April 2026 06:11:40 +0000 (0:00:00.868) 0:16:43.695 ********** 2026-04-17 06:12:06.343300 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-17 06:12:06.343311 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-17 06:12:06.343321 | orchestrator | 2026-04-17 06:12:06.343332 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-17 06:12:06.343342 | orchestrator | Friday 17 April 2026 06:11:42 +0000 (0:00:01.906) 0:16:45.601 ********** 2026-04-17 06:12:06.343353 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-17 06:12:06.343364 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-17 06:12:06.343374 | orchestrator | 2026-04-17 06:12:06.343385 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-17 06:12:06.343395 | orchestrator | Friday 17 April 2026 06:11:46 +0000 (0:00:03.627) 
0:16:49.229 ********** 2026-04-17 06:12:06.343406 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343416 | orchestrator | 2026-04-17 06:12:06.343427 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-17 06:12:06.343437 | orchestrator | Friday 17 April 2026 06:11:46 +0000 (0:00:00.248) 0:16:49.477 ********** 2026-04-17 06:12:06.343448 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343459 | orchestrator | 2026-04-17 06:12:06.343469 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-17 06:12:06.343480 | orchestrator | Friday 17 April 2026 06:11:46 +0000 (0:00:00.258) 0:16:49.736 ********** 2026-04-17 06:12:06.343490 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343501 | orchestrator | 2026-04-17 06:12:06.343511 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-17 06:12:06.343522 | orchestrator | Friday 17 April 2026 06:11:47 +0000 (0:00:00.332) 0:16:50.068 ********** 2026-04-17 06:12:06.343532 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343543 | orchestrator | 2026-04-17 06:12:06.343553 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-17 06:12:06.343564 | orchestrator | Friday 17 April 2026 06:11:47 +0000 (0:00:00.153) 0:16:50.222 ********** 2026-04-17 06:12:06.343574 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343585 | orchestrator | 2026-04-17 06:12:06.343609 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-04-17 06:12:06.343620 | orchestrator | Friday 17 April 2026 06:11:47 +0000 (0:00:00.123) 0:16:50.345 ********** 2026-04-17 06:12:06.343638 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-04-17 06:12:06.343650 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-04-17 06:12:06.343661 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-04-17 06:12:06.343689 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-04-17 06:12:06.343700 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:12:06.343711 | orchestrator | 2026-04-17 06:12:06.343722 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-17 06:12:06.343733 | orchestrator | Friday 17 April 2026 06:12:00 +0000 (0:00:13.214) 0:17:03.560 ********** 2026-04-17 06:12:06.343744 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343754 | orchestrator | 2026-04-17 06:12:06.343765 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-17 06:12:06.343776 | orchestrator | Friday 17 April 2026 06:12:01 +0000 (0:00:00.581) 0:17:04.141 ********** 2026-04-17 06:12:06.343787 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343797 | orchestrator | 2026-04-17 06:12:06.343808 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-17 06:12:06.343819 | orchestrator | Friday 17 April 2026 06:12:01 +0000 (0:00:00.141) 0:17:04.283 ********** 2026-04-17 06:12:06.343829 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343840 | orchestrator | 2026-04-17 06:12:06.343850 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-17 06:12:06.343861 | orchestrator | Friday 17 April 2026 06:12:01 +0000 (0:00:00.147) 0:17:04.430 ********** 2026-04-17 06:12:06.343872 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343882 | orchestrator 
| 2026-04-17 06:12:06.343893 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-17 06:12:06.343934 | orchestrator | Friday 17 April 2026 06:12:01 +0000 (0:00:00.136) 0:17:04.567 ********** 2026-04-17 06:12:06.343945 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343956 | orchestrator | 2026-04-17 06:12:06.343966 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-17 06:12:06.343977 | orchestrator | Friday 17 April 2026 06:12:01 +0000 (0:00:00.161) 0:17:04.729 ********** 2026-04-17 06:12:06.343987 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.343998 | orchestrator | 2026-04-17 06:12:06.344008 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-17 06:12:06.344019 | orchestrator | Friday 17 April 2026 06:12:02 +0000 (0:00:00.162) 0:17:04.891 ********** 2026-04-17 06:12:06.344029 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:12:06.344040 | orchestrator | 2026-04-17 06:12:06.344051 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-17 06:12:06.344061 | orchestrator | 2026-04-17 06:12:06.344072 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:12:06.344083 | orchestrator | Friday 17 April 2026 06:12:02 +0000 (0:00:00.600) 0:17:05.492 ********** 2026-04-17 06:12:06.344093 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-04-17 06:12:06.344103 | orchestrator | 2026-04-17 06:12:06.344114 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 06:12:06.344125 | orchestrator | Friday 17 April 2026 06:12:02 +0000 (0:00:00.244) 0:17:05.737 ********** 2026-04-17 06:12:06.344135 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:06.344146 | orchestrator | 
2026-04-17 06:12:06.344156 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-17 06:12:06.344167 | orchestrator | Friday 17 April 2026 06:12:03 +0000 (0:00:00.489) 0:17:06.227 ********** 2026-04-17 06:12:06.344178 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:06.344204 | orchestrator | 2026-04-17 06:12:06.344215 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 06:12:06.344226 | orchestrator | Friday 17 April 2026 06:12:03 +0000 (0:00:00.150) 0:17:06.378 ********** 2026-04-17 06:12:06.344236 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:06.344247 | orchestrator | 2026-04-17 06:12:06.344257 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:12:06.344268 | orchestrator | Friday 17 April 2026 06:12:04 +0000 (0:00:00.451) 0:17:06.829 ********** 2026-04-17 06:12:06.344278 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:06.344289 | orchestrator | 2026-04-17 06:12:06.344300 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-17 06:12:06.344310 | orchestrator | Friday 17 April 2026 06:12:04 +0000 (0:00:00.545) 0:17:07.375 ********** 2026-04-17 06:12:06.344321 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:06.344331 | orchestrator | 2026-04-17 06:12:06.344342 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-17 06:12:06.344352 | orchestrator | Friday 17 April 2026 06:12:04 +0000 (0:00:00.148) 0:17:07.524 ********** 2026-04-17 06:12:06.344363 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:06.344373 | orchestrator | 2026-04-17 06:12:06.344384 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-17 06:12:06.344395 | orchestrator | Friday 17 April 2026 06:12:04 +0000 (0:00:00.192) 0:17:07.716 
********** 2026-04-17 06:12:06.344405 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:06.344416 | orchestrator | 2026-04-17 06:12:06.344427 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-17 06:12:06.344437 | orchestrator | Friday 17 April 2026 06:12:05 +0000 (0:00:00.152) 0:17:07.869 ********** 2026-04-17 06:12:06.344448 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:06.344459 | orchestrator | 2026-04-17 06:12:06.344469 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-17 06:12:06.344480 | orchestrator | Friday 17 April 2026 06:12:05 +0000 (0:00:00.152) 0:17:08.022 ********** 2026-04-17 06:12:06.344495 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:12:06.344507 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:12:06.344517 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:12:06.344528 | orchestrator | 2026-04-17 06:12:06.344539 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-17 06:12:06.344549 | orchestrator | Friday 17 April 2026 06:12:06 +0000 (0:00:00.787) 0:17:08.809 ********** 2026-04-17 06:12:06.344560 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:06.344571 | orchestrator | 2026-04-17 06:12:06.344589 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-17 06:12:14.206879 | orchestrator | Friday 17 April 2026 06:12:06 +0000 (0:00:00.274) 0:17:09.084 ********** 2026-04-17 06:12:14.207075 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:12:14.207107 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:12:14.207128 | 
orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:12:14.207146 | orchestrator | 2026-04-17 06:12:14.207159 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-17 06:12:14.207170 | orchestrator | Friday 17 April 2026 06:12:08 +0000 (0:00:01.874) 0:17:10.958 ********** 2026-04-17 06:12:14.207181 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-17 06:12:14.207193 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-17 06:12:14.207203 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-17 06:12:14.207214 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:14.207225 | orchestrator | 2026-04-17 06:12:14.207237 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-17 06:12:14.207273 | orchestrator | Friday 17 April 2026 06:12:08 +0000 (0:00:00.442) 0:17:11.401 ********** 2026-04-17 06:12:14.207286 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-17 06:12:14.207303 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-17 06:12:14.207355 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 06:12:14.207378 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:14.207396 | orchestrator | 2026-04-17 
06:12:14.207410 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-17 06:12:14.207422 | orchestrator | Friday 17 April 2026 06:12:09 +0000 (0:00:01.107) 0:17:12.509 ********** 2026-04-17 06:12:14.207437 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:14.207454 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:14.207467 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:14.207479 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:14.207491 | orchestrator | 2026-04-17 06:12:14.207503 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-17 06:12:14.207516 | orchestrator | Friday 17 April 2026 06:12:09 +0000 (0:00:00.181) 0:17:12.690 ********** 2026-04-17 06:12:14.207559 | orchestrator | 
ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:12:06.872709', 'end': '2026-04-17 06:12:06.918734', 'delta': '0:00:00.046025', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-17 06:12:14.207597 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:12:07.433886', 'end': '2026-04-17 06:12:07.479636', 'delta': '0:00:00.045750', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-17 06:12:14.207622 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:12:08.012686', 'end': '2026-04-17 06:12:08.063167', 'delta': '0:00:00.050481', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-17 06:12:14.207634 | orchestrator | 2026-04-17 06:12:14.207647 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-17 06:12:14.207659 | orchestrator | Friday 17 April 2026 06:12:10 +0000 (0:00:00.217) 0:17:12.907 ********** 2026-04-17 06:12:14.207672 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:14.207685 | orchestrator | 2026-04-17 06:12:14.207697 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-17 06:12:14.207710 | orchestrator | Friday 17 April 2026 06:12:10 +0000 (0:00:00.268) 0:17:13.175 ********** 2026-04-17 06:12:14.207722 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:14.207734 | orchestrator | 2026-04-17 06:12:14.207747 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-17 06:12:14.207759 | orchestrator | Friday 17 April 2026 06:12:11 +0000 (0:00:01.119) 0:17:14.295 ********** 2026-04-17 06:12:14.207771 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:14.207782 | orchestrator | 2026-04-17 06:12:14.207793 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-17 06:12:14.207804 | orchestrator | Friday 17 April 2026 06:12:11 +0000 (0:00:00.151) 0:17:14.447 ********** 2026-04-17 06:12:14.207815 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:12:14.207826 | orchestrator | 2026-04-17 06:12:14.207836 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:12:14.207847 | orchestrator | 
Friday 17 April 2026 06:12:12 +0000 (0:00:01.002) 0:17:15.449 ********** 2026-04-17 06:12:14.207857 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:14.207868 | orchestrator | 2026-04-17 06:12:14.207879 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 06:12:14.207889 | orchestrator | Friday 17 April 2026 06:12:12 +0000 (0:00:00.155) 0:17:15.605 ********** 2026-04-17 06:12:14.207900 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:14.207911 | orchestrator | 2026-04-17 06:12:14.207955 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-17 06:12:14.207967 | orchestrator | Friday 17 April 2026 06:12:12 +0000 (0:00:00.140) 0:17:15.745 ********** 2026-04-17 06:12:14.207978 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:14.208003 | orchestrator | 2026-04-17 06:12:14.208014 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:12:14.208024 | orchestrator | Friday 17 April 2026 06:12:13 +0000 (0:00:00.233) 0:17:15.979 ********** 2026-04-17 06:12:14.208035 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:14.208046 | orchestrator | 2026-04-17 06:12:14.208056 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 06:12:14.208067 | orchestrator | Friday 17 April 2026 06:12:13 +0000 (0:00:00.158) 0:17:16.138 ********** 2026-04-17 06:12:14.208078 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:14.208088 | orchestrator | 2026-04-17 06:12:14.208107 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-17 06:12:14.208118 | orchestrator | Friday 17 April 2026 06:12:13 +0000 (0:00:00.161) 0:17:16.299 ********** 2026-04-17 06:12:14.208128 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:14.208139 | orchestrator | 2026-04-17 06:12:14.208155 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-17 06:12:14.208166 | orchestrator | Friday 17 April 2026 06:12:13 +0000 (0:00:00.195) 0:17:16.495 ********** 2026-04-17 06:12:14.208177 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:14.208187 | orchestrator | 2026-04-17 06:12:14.208198 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 06:12:14.208208 | orchestrator | Friday 17 April 2026 06:12:13 +0000 (0:00:00.136) 0:17:16.631 ********** 2026-04-17 06:12:14.208219 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:14.208230 | orchestrator | 2026-04-17 06:12:14.208240 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 06:12:14.208251 | orchestrator | Friday 17 April 2026 06:12:14 +0000 (0:00:00.178) 0:17:16.809 ********** 2026-04-17 06:12:14.208262 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:14.208273 | orchestrator | 2026-04-17 06:12:14.596669 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 06:12:14.596771 | orchestrator | Friday 17 April 2026 06:12:14 +0000 (0:00:00.137) 0:17:16.947 ********** 2026-04-17 06:12:14.596789 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:12:14.596803 | orchestrator | 2026-04-17 06:12:14.596815 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 06:12:14.596826 | orchestrator | Friday 17 April 2026 06:12:14 +0000 (0:00:00.187) 0:17:17.135 ********** 2026-04-17 06:12:14.596841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:12:14.596857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'uuids': ['0c9a4a4e-baea-4a48-b886-e6edd30675e6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB']}})  2026-04-17 06:12:14.596874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdcd9064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-17 06:12:14.596888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd']}})  2026-04-17 06:12:14.596971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:12:14.597003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:12:14.597036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-04-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 06:12:14.597050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:12:14.597062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM', 'dm-uuid-CRYPT-LUKS2-23d95080c3d748658de3cafbcbf22080-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:12:14.597073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:12:14.597085 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'uuids': ['23d95080-c3d7-4865-8de3-cafbcbf22080'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM']}})  2026-04-17 06:12:14.597107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0']}})  2026-04-17 06:12:14.597124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:12:14.597151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '11ed6889', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 06:12:15.317090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:12:15.317219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:12:15.317277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB', 'dm-uuid-CRYPT-LUKS2-0c9a4a4ebaea4a48b886e6edd30675e6-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:12:15.317300 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:12:15.317321 | orchestrator | 2026-04-17 06:12:15.317340 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 06:12:15.317359 | orchestrator | Friday 17 April 2026 06:12:15 +0000 (0:00:00.704) 0:17:17.839 ********** 2026-04-17 06:12:15.317396 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:15.317417 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'uuids': ['0c9a4a4e-baea-4a48-b886-e6edd30675e6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:15.317437 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdcd9064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:15.317479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:15.317515 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:15.317541 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:15.317561 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:15.317581 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:15.317610 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM', 'dm-uuid-CRYPT-LUKS2-23d95080c3d748658de3cafbcbf22080-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:16.661511 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:16.661613 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'uuids': ['23d95080-c3d7-4865-8de3-cafbcbf22080'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:16.661655 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:16.661671 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:16.661703 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '11ed6889', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:16.661743 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:16.661756 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:12:16.661768 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB', 'dm-uuid-CRYPT-LUKS2-0c9a4a4ebaea4a48b886e6edd30675e6-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:12:16.661781 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:16.661794 | orchestrator |
2026-04-17 06:12:16.661806 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-17 06:12:16.661818 | orchestrator | Friday 17 April 2026 06:12:15 +0000 (0:00:00.437) 0:17:18.277 **********
2026-04-17 06:12:16.661829 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:16.661841 | orchestrator |
2026-04-17 06:12:16.661852 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-17 06:12:16.661870 | orchestrator | Friday 17 April 2026 06:12:16 +0000 (0:00:00.486) 0:17:18.763 **********
2026-04-17 06:12:16.661881 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:16.661892 | orchestrator |
2026-04-17 06:12:16.661903 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:12:16.661914 | orchestrator | Friday 17 April 2026 06:12:16 +0000 (0:00:00.146) 0:17:18.909 **********
2026-04-17 06:12:16.661981 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:16.661994 | orchestrator |
2026-04-17 06:12:16.662005 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:12:16.662086 | orchestrator | Friday 17 April 2026 06:12:16 +0000 (0:00:00.492) 0:17:19.402 **********
2026-04-17 06:12:32.197573 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.197721 | orchestrator |
2026-04-17 06:12:32.197759 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:12:32.197781 | orchestrator | Friday 17 April 2026 06:12:16 +0000 (0:00:00.176) 0:17:19.579 **********
2026-04-17 06:12:32.197798 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.197809 | orchestrator |
2026-04-17 06:12:32.197821 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:12:32.197832 | orchestrator | Friday 17 April 2026 06:12:17 +0000 (0:00:00.302) 0:17:19.882 **********
2026-04-17 06:12:32.197842 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.197853 | orchestrator |
2026-04-17 06:12:32.197864 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 06:12:32.197875 | orchestrator | Friday 17 April 2026 06:12:17 +0000 (0:00:00.167) 0:17:20.050 **********
2026-04-17 06:12:32.197887 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 06:12:32.197899 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 06:12:32.197909 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 06:12:32.197920 | orchestrator |
2026-04-17 06:12:32.197931 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 06:12:32.197942 | orchestrator | Friday 17 April 2026 06:12:18 +0000 (0:00:01.156) 0:17:21.207 **********
2026-04-17 06:12:32.197953 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 06:12:32.197964 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 06:12:32.198008 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 06:12:32.198077 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.198090 | orchestrator |
2026-04-17 06:12:32.198101 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-17 06:12:32.198113 | orchestrator | Friday 17 April 2026 06:12:18 +0000 (0:00:00.189) 0:17:21.397 **********
2026-04-17 06:12:32.198126 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-04-17 06:12:32.198138 | orchestrator |
2026-04-17 06:12:32.198151 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:12:32.198165 | orchestrator | Friday 17 April 2026 06:12:18 +0000 (0:00:00.221) 0:17:21.618 **********
2026-04-17 06:12:32.198177 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.198189 | orchestrator |
2026-04-17 06:12:32.198218 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:12:32.198231 | orchestrator | Friday 17 April 2026 06:12:19 +0000 (0:00:00.158) 0:17:21.777 **********
2026-04-17 06:12:32.198243 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.198255 | orchestrator |
2026-04-17 06:12:32.198267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:12:32.198280 | orchestrator | Friday 17 April 2026 06:12:19 +0000 (0:00:00.524) 0:17:22.302 **********
2026-04-17 06:12:32.198294 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.198320 | orchestrator |
2026-04-17 06:12:32.198344 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:12:32.198357 | orchestrator | Friday 17 April 2026 06:12:19 +0000 (0:00:00.160) 0:17:22.462 **********
2026-04-17 06:12:32.198395 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:32.198407 | orchestrator |
2026-04-17 06:12:32.198418 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:12:32.198429 | orchestrator | Friday 17 April 2026 06:12:19 +0000 (0:00:00.268) 0:17:22.730 **********
2026-04-17 06:12:32.198440 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-17 06:12:32.198450 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 06:12:32.198461 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-17 06:12:32.198472 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.198483 | orchestrator |
2026-04-17 06:12:32.198494 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 06:12:32.198505 | orchestrator | Friday 17 April 2026 06:12:20 +0000 (0:00:00.475) 0:17:23.206 **********
2026-04-17 06:12:32.198516 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-17 06:12:32.198527 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 06:12:32.198537 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-17 06:12:32.198548 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.198559 | orchestrator |
2026-04-17 06:12:32.198569 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 06:12:32.198580 | orchestrator | Friday 17 April 2026 06:12:20 +0000 (0:00:00.434) 0:17:23.641 **********
2026-04-17 06:12:32.198591 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-17 06:12:32.198601 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 06:12:32.198612 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-17 06:12:32.198623 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.198633 | orchestrator |
2026-04-17 06:12:32.198644 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 06:12:32.198655 | orchestrator | Friday 17 April 2026 06:12:21 +0000 (0:00:00.176) 0:17:24.076 **********
2026-04-17 06:12:32.198665 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:32.198676 | orchestrator |
2026-04-17 06:12:32.198687 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 06:12:32.198698 | orchestrator | Friday 17 April 2026 06:12:21 +0000 (0:00:00.176) 0:17:24.253 **********
2026-04-17 06:12:32.198715 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-17 06:12:32.198736 | orchestrator |
2026-04-17 06:12:32.198758 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-17 06:12:32.198779 | orchestrator | Friday 17 April 2026 06:12:21 +0000 (0:00:00.345) 0:17:24.598 **********
2026-04-17 06:12:32.198811 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:12:32.198822 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:12:32.198833 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:12:32.198843 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 06:12:32.198854 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 06:12:32.198865 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:12:32.198875 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:12:32.198886 | orchestrator |
2026-04-17 06:12:32.198896 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-17 06:12:32.198907 | orchestrator | Friday 17 April 2026 06:12:23 +0000 (0:00:01.265) 0:17:25.864 **********
2026-04-17 06:12:32.198917 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:12:32.198928 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:12:32.198951 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:12:32.198962 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 06:12:32.198999 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 06:12:32.199011 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:12:32.199021 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:12:32.199032 | orchestrator |
2026-04-17 06:12:32.199042 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-04-17 06:12:32.199053 | orchestrator | Friday 17 April 2026 06:12:24 +0000 (0:00:01.802) 0:17:27.666 **********
2026-04-17 06:12:32.199064 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:32.199074 | orchestrator |
2026-04-17 06:12:32.199085 | orchestrator | TASK [Set num_osds] ************************************************************
2026-04-17 06:12:32.199096 | orchestrator | Friday 17 April 2026 06:12:25 +0000 (0:00:00.554) 0:17:28.221 **********
2026-04-17 06:12:32.199107 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:32.199118 | orchestrator |
2026-04-17 06:12:32.199134 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-04-17 06:12:32.199146 | orchestrator | Friday 17 April 2026 06:12:25 +0000 (0:00:00.144) 0:17:28.365 **********
2026-04-17 06:12:32.199156 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:32.199167 | orchestrator |
2026-04-17 06:12:32.199177 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-04-17 06:12:32.199188 | orchestrator | Friday 17 April 2026 06:12:25 +0000 (0:00:00.236) 0:17:28.602 **********
2026-04-17 06:12:32.199199 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-04-17 06:12:32.199209 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-04-17 06:12:32.199220 | orchestrator |
2026-04-17 06:12:32.199231 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 06:12:32.199242 | orchestrator | Friday 17 April 2026 06:12:29 +0000 (0:00:03.612) 0:17:32.215 **********
2026-04-17 06:12:32.199252 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-04-17 06:12:32.199263 | orchestrator |
2026-04-17 06:12:32.199274 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 06:12:32.199284 | orchestrator | Friday 17 April 2026 06:12:29 +0000 (0:00:00.230) 0:17:32.445 **********
2026-04-17 06:12:32.199295 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-04-17 06:12:32.199306 | orchestrator |
2026-04-17 06:12:32.199316 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 06:12:32.199327 | orchestrator | Friday 17 April 2026 06:12:29 +0000 (0:00:00.224) 0:17:32.669 **********
2026-04-17 06:12:32.199338 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.199348 | orchestrator |
2026-04-17 06:12:32.199359 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 06:12:32.199370 | orchestrator | Friday 17 April 2026 06:12:30 +0000 (0:00:00.148) 0:17:32.818 **********
2026-04-17 06:12:32.199380 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:32.199391 | orchestrator |
2026-04-17 06:12:32.199402 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 06:12:32.199413 | orchestrator | Friday 17 April 2026 06:12:30 +0000 (0:00:00.503) 0:17:33.322 **********
2026-04-17 06:12:32.199423 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:32.199434 | orchestrator |
2026-04-17 06:12:32.199445 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 06:12:32.199455 | orchestrator | Friday 17 April 2026 06:12:31 +0000 (0:00:00.545) 0:17:33.867 **********
2026-04-17 06:12:32.199466 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:32.199477 | orchestrator |
2026-04-17 06:12:32.199487 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 06:12:32.199498 | orchestrator | Friday 17 April 2026 06:12:31 +0000 (0:00:00.564) 0:17:34.432 **********
2026-04-17 06:12:32.199515 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.199526 | orchestrator |
2026-04-17 06:12:32.199537 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 06:12:32.199548 | orchestrator | Friday 17 April 2026 06:12:31 +0000 (0:00:00.148) 0:17:34.580 **********
2026-04-17 06:12:32.199559 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.199569 | orchestrator |
2026-04-17 06:12:32.199580 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 06:12:32.199591 | orchestrator | Friday 17 April 2026 06:12:32 +0000 (0:00:00.176) 0:17:34.756 **********
2026-04-17 06:12:32.199601 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:32.199612 | orchestrator |
2026-04-17 06:12:32.199630 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 06:12:44.201865 | orchestrator | Friday 17 April 2026 06:12:32 +0000 (0:00:00.177) 0:17:34.934 **********
2026-04-17 06:12:44.202163 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:44.202232 | orchestrator |
2026-04-17 06:12:44.202248 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 06:12:44.202259 | orchestrator | Friday 17 April 2026 06:12:32 +0000 (0:00:00.510) 0:17:35.444 **********
2026-04-17 06:12:44.202271 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:44.202283 | orchestrator |
2026-04-17 06:12:44.202294 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 06:12:44.202305 | orchestrator | Friday 17 April 2026 06:12:33 +0000 (0:00:00.885) 0:17:36.330 **********
2026-04-17 06:12:44.202317 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.202328 | orchestrator |
2026-04-17 06:12:44.202339 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 06:12:44.202350 | orchestrator | Friday 17 April 2026 06:12:33 +0000 (0:00:00.133) 0:17:36.463 **********
2026-04-17 06:12:44.202361 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.202372 | orchestrator |
2026-04-17 06:12:44.202382 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 06:12:44.202394 | orchestrator | Friday 17 April 2026 06:12:33 +0000 (0:00:00.141) 0:17:36.605 **********
2026-04-17 06:12:44.202407 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:44.202419 | orchestrator |
2026-04-17 06:12:44.202431 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 06:12:44.202444 | orchestrator | Friday 17 April 2026 06:12:34 +0000 (0:00:00.163) 0:17:36.768 **********
2026-04-17 06:12:44.202456 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:44.202468 | orchestrator |
2026-04-17 06:12:44.202480 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 06:12:44.202492 | orchestrator | Friday 17 April 2026 06:12:34 +0000 (0:00:00.156) 0:17:36.924 **********
2026-04-17 06:12:44.202504 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:44.202516 | orchestrator |
2026-04-17 06:12:44.202529 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 06:12:44.202541 | orchestrator | Friday 17 April 2026 06:12:34 +0000 (0:00:00.171) 0:17:37.095 **********
2026-04-17 06:12:44.202554 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.202566 | orchestrator |
2026-04-17 06:12:44.202578 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 06:12:44.202591 | orchestrator | Friday 17 April 2026 06:12:34 +0000 (0:00:00.146) 0:17:37.242 **********
2026-04-17 06:12:44.202603 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.202616 | orchestrator |
2026-04-17 06:12:44.202645 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 06:12:44.202657 | orchestrator | Friday 17 April 2026 06:12:34 +0000 (0:00:00.150) 0:17:37.392 **********
2026-04-17 06:12:44.202670 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.202682 | orchestrator |
2026-04-17 06:12:44.202695 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 06:12:44.202707 | orchestrator | Friday 17 April 2026 06:12:34 +0000 (0:00:00.162) 0:17:37.554 **********
2026-04-17 06:12:44.202743 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:44.202756 | orchestrator |
2026-04-17 06:12:44.202769 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 06:12:44.202781 | orchestrator | Friday 17 April 2026 06:12:34 +0000 (0:00:00.158) 0:17:37.713 **********
2026-04-17 06:12:44.202793 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:44.202805 | orchestrator |
2026-04-17 06:12:44.202818 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-17 06:12:44.202830 | orchestrator | Friday 17 April 2026 06:12:35 +0000 (0:00:00.215) 0:17:37.928 **********
2026-04-17 06:12:44.202841 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.202851 | orchestrator |
2026-04-17 06:12:44.202862 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-17 06:12:44.202873 | orchestrator | Friday 17 April 2026 06:12:35 +0000 (0:00:00.170) 0:17:38.099 **********
2026-04-17 06:12:44.202884 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.202894 | orchestrator |
2026-04-17 06:12:44.202905 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-17 06:12:44.202916 | orchestrator | Friday 17 April 2026 06:12:35 +0000 (0:00:00.544) 0:17:38.643 **********
2026-04-17 06:12:44.202927 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.202938 | orchestrator |
2026-04-17 06:12:44.202949 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-17 06:12:44.202959 | orchestrator | Friday 17 April 2026 06:12:36 +0000 (0:00:00.142) 0:17:38.786 **********
2026-04-17 06:12:44.202970 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.202981 | orchestrator |
2026-04-17 06:12:44.202992 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-17 06:12:44.203033 | orchestrator | Friday 17 April 2026 06:12:36 +0000 (0:00:00.145) 0:17:38.931 **********
2026-04-17 06:12:44.203050 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203061 | orchestrator |
2026-04-17 06:12:44.203072 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-17 06:12:44.203083 | orchestrator | Friday 17 April 2026 06:12:36 +0000 (0:00:00.126) 0:17:39.057 **********
2026-04-17 06:12:44.203094 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203104 | orchestrator |
2026-04-17 06:12:44.203115 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-17 06:12:44.203126 | orchestrator | Friday 17 April 2026 06:12:36 +0000 (0:00:00.144) 0:17:39.202 **********
2026-04-17 06:12:44.203136 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203147 | orchestrator |
2026-04-17 06:12:44.203158 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-17 06:12:44.203169 | orchestrator | Friday 17 April 2026 06:12:36 +0000 (0:00:00.134) 0:17:39.337 **********
2026-04-17 06:12:44.203180 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203191 | orchestrator |
2026-04-17 06:12:44.203201 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-17 06:12:44.203212 | orchestrator | Friday 17 April 2026 06:12:36 +0000 (0:00:00.149) 0:17:39.486 **********
2026-04-17 06:12:44.203242 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203254 | orchestrator |
2026-04-17 06:12:44.203265 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-17 06:12:44.203276 | orchestrator | Friday 17 April 2026 06:12:36 +0000 (0:00:00.130) 0:17:39.616 **********
2026-04-17 06:12:44.203286 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203296 | orchestrator |
2026-04-17 06:12:44.203307 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-17 06:12:44.203317 | orchestrator | Friday 17 April 2026 06:12:37 +0000 (0:00:00.136) 0:17:39.753 **********
2026-04-17 06:12:44.203328 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203339 | orchestrator |
2026-04-17 06:12:44.203349 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-17 06:12:44.203360 | orchestrator | Friday 17 April 2026 06:12:37 +0000 (0:00:00.122) 0:17:39.875 **********
2026-04-17 06:12:44.203380 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203391 | orchestrator |
2026-04-17 06:12:44.203401 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-17 06:12:44.203412 | orchestrator | Friday 17 April 2026 06:12:37 +0000 (0:00:00.215) 0:17:40.091 **********
2026-04-17 06:12:44.203422 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:44.203433 | orchestrator |
2026-04-17 06:12:44.203443 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-17 06:12:44.203454 | orchestrator | Friday 17 April 2026 06:12:38 +0000 (0:00:01.000) 0:17:41.091 **********
2026-04-17 06:12:44.203465 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:44.203481 | orchestrator |
2026-04-17 06:12:44.203499 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-17 06:12:44.203517 | orchestrator | Friday 17 April 2026 06:12:40 +0000 (0:00:01.658) 0:17:42.750 **********
2026-04-17 06:12:44.203535 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-04-17 06:12:44.203553 | orchestrator |
2026-04-17 06:12:44.203572 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-17 06:12:44.203591 | orchestrator | Friday 17 April 2026 06:12:40 +0000 (0:00:00.233) 0:17:42.983 **********
2026-04-17 06:12:44.203610 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203628 | orchestrator |
2026-04-17 06:12:44.203644 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-17 06:12:44.203655 | orchestrator | Friday 17 April 2026 06:12:40 +0000 (0:00:00.163) 0:17:43.147 **********
2026-04-17 06:12:44.203666 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203677 | orchestrator |
2026-04-17 06:12:44.203695 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-17 06:12:44.203707 | orchestrator | Friday 17 April 2026 06:12:40 +0000 (0:00:00.158) 0:17:43.305 **********
2026-04-17 06:12:44.203717 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 06:12:44.203728 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 06:12:44.203739 | orchestrator |
2026-04-17 06:12:44.203749 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-17 06:12:44.203760 | orchestrator | Friday 17 April 2026 06:12:41 +0000 (0:00:00.843) 0:17:44.149 **********
2026-04-17 06:12:44.203771 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:44.203781 | orchestrator |
2026-04-17 06:12:44.203792 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-17 06:12:44.203802 | orchestrator | Friday 17 April 2026 06:12:41 +0000 (0:00:00.472) 0:17:44.622 **********
2026-04-17 06:12:44.203813 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203824 | orchestrator |
2026-04-17 06:12:44.203834 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-17 06:12:44.203845 | orchestrator | Friday 17 April 2026 06:12:42 +0000 (0:00:00.245) 0:17:44.867 **********
2026-04-17 06:12:44.203855 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203866 | orchestrator |
2026-04-17 06:12:44.203877 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-17 06:12:44.203887 | orchestrator | Friday 17 April 2026 06:12:42 +0000 (0:00:00.173) 0:17:45.040 **********
2026-04-17 06:12:44.203898 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.203909 | orchestrator |
2026-04-17 06:12:44.203919 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-17 06:12:44.203930 | orchestrator | Friday 17 April 2026 06:12:42 +0000 (0:00:00.142) 0:17:45.182 **********
2026-04-17 06:12:44.203940 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-04-17 06:12:44.203951 | orchestrator |
2026-04-17 06:12:44.203961 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-17 06:12:44.203972 | orchestrator | Friday 17 April 2026 06:12:42 +0000 (0:00:00.226) 0:17:45.408 **********
2026-04-17 06:12:44.203991 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:12:44.204039 | orchestrator |
2026-04-17 06:12:44.204052 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-17 06:12:44.204063 | orchestrator | Friday 17 April 2026 06:12:43 +0000 (0:00:00.741) 0:17:46.150 **********
2026-04-17 06:12:44.204073 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 06:12:44.204084 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 06:12:44.204095 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 06:12:44.204105 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.204116 | orchestrator |
2026-04-17 06:12:44.204127 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-17 06:12:44.204138 | orchestrator | Friday 17 April 2026 06:12:43 +0000 (0:00:00.145) 0:17:46.296 **********
2026-04-17 06:12:44.204148 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:12:44.204159 | orchestrator |
2026-04-17 06:12:44.204170 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-17 06:12:44.204180 | orchestrator | Friday 17 April 2026 06:12:44 +0000 (0:00:00.539) 0:17:46.835 **********
2026-04-17 06:12:44.204201 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.922657 | orchestrator |
2026-04-17 06:13:01.922772 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-17 06:13:01.922789 | orchestrator | Friday 17 April 2026 06:12:44 +0000 (0:00:00.199) 0:17:47.035 **********
2026-04-17 06:13:01.922802 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.922814 | orchestrator |
2026-04-17 06:13:01.922825 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-17 06:13:01.922837 | orchestrator | Friday 17 April 2026 06:12:44 +0000 (0:00:00.177) 0:17:47.212 **********
2026-04-17 06:13:01.922848 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.922858 | orchestrator |
2026-04-17 06:13:01.922869 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-17 06:13:01.922880 | orchestrator | Friday 17 April 2026 06:12:44 +0000 (0:00:00.166) 0:17:47.378 **********
2026-04-17 06:13:01.922891 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.922902 | orchestrator |
2026-04-17 06:13:01.922913 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-17 06:13:01.922924 | orchestrator | Friday 17 April 2026 06:12:44 +0000 (0:00:00.158) 0:17:47.537 **********
2026-04-17 06:13:01.922935 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:13:01.922947 | orchestrator |
2026-04-17 06:13:01.922958 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-17 06:13:01.922970 | orchestrator | Friday 17 April 2026 06:12:46 +0000 (0:00:01.457) 0:17:48.995 **********
2026-04-17 06:13:01.922981 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:13:01.922992 | orchestrator |
2026-04-17 06:13:01.923003 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-17 06:13:01.923014 | orchestrator | Friday 17 April 2026 06:12:46 +0000 (0:00:00.154) 0:17:49.150 **********
2026-04-17 06:13:01.923024 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-04-17 06:13:01.923035 | orchestrator |
2026-04-17 06:13:01.923088 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-17 06:13:01.923099 | orchestrator | Friday 17 April 2026 06:12:46 +0000 (0:00:00.275) 0:17:49.425 **********
2026-04-17 06:13:01.923110 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.923121 | orchestrator |
2026-04-17 06:13:01.923132 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-17 06:13:01.923143 | orchestrator | Friday 17 April 2026 06:12:46 +0000 (0:00:00.156) 0:17:49.581 **********
2026-04-17 06:13:01.923154 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.923165 | orchestrator |
2026-04-17 06:13:01.923193 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-17 06:13:01.923229 | orchestrator | Friday 17 April 2026 06:12:46 +0000 (0:00:00.156) 0:17:49.738 **********
2026-04-17 06:13:01.923243 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.923256 | orchestrator |
2026-04-17 06:13:01.923269 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-17 06:13:01.923281 | orchestrator | Friday 17 April 2026 06:12:47 +0000 (0:00:00.154) 0:17:49.892 **********
2026-04-17 06:13:01.923294 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.923306 | orchestrator |
2026-04-17 06:13:01.923319 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-17 06:13:01.923331 | orchestrator | Friday 17 April 2026 06:12:47 +0000 (0:00:00.144) 0:17:50.037 **********
2026-04-17 06:13:01.923344 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.923356 | orchestrator |
2026-04-17 06:13:01.923369 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-17 06:13:01.923382 | orchestrator | Friday 17 April 2026 06:12:47 +0000 (0:00:00.575) 0:17:50.613 **********
2026-04-17 06:13:01.923395 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.923407 | orchestrator |
2026-04-17 06:13:01.923420 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-17 06:13:01.923433 | orchestrator | Friday 17 April 2026 06:12:48 +0000 (0:00:00.167) 0:17:50.780 **********
2026-04-17 06:13:01.923445 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.923457 | orchestrator |
2026-04-17 06:13:01.923470 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-17 06:13:01.923482 | orchestrator | Friday 17 April 2026 06:12:48 +0000 (0:00:00.247) 0:17:51.028 **********
2026-04-17 06:13:01.923494 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.923507 | orchestrator |
2026-04-17 06:13:01.923519 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-17 06:13:01.923532 | orchestrator | Friday 17 April 2026 06:12:48 +0000 (0:00:00.174) 0:17:51.203 **********
2026-04-17 06:13:01.923544 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:13:01.923554 | orchestrator |
2026-04-17 06:13:01.923565 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-17 06:13:01.923576 | orchestrator | Friday 17 April 2026 06:12:48 +0000 (0:00:00.265) 0:17:51.468 **********
2026-04-17 06:13:01.923587 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-04-17 06:13:01.923599 | orchestrator |
2026-04-17 06:13:01.923610 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-17 06:13:01.923620 | orchestrator | Friday 17 April 2026 06:12:48 +0000 (0:00:00.224) 0:17:51.693 **********
2026-04-17 06:13:01.923631 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-04-17 06:13:01.923643 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-17 06:13:01.923654 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-17 06:13:01.923664 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-17 06:13:01.923675 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-17 06:13:01.923686 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-17 06:13:01.923697 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-17 06:13:01.923707 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-17 06:13:01.923719 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-17 06:13:01.923747 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-17 06:13:01.923759 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-17 06:13:01.923769 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-17 06:13:01.923780 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-17 06:13:01.923791 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-17 06:13:01.923801 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-04-17 06:13:01.923820 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-04-17 06:13:01.923831 | orchestrator |
2026-04-17 06:13:01.923842 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-17 06:13:01.923852 | orchestrator | Friday 17 April 2026 06:12:54 +0000 (0:00:05.577) 0:17:57.271 **********
2026-04-17 06:13:01.923863 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-04-17 06:13:01.923874 | orchestrator |
2026-04-17 06:13:01.923884 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-17 06:13:01.923895 | orchestrator | Friday 17 April 2026 06:12:54 +0000 (0:00:00.215) 0:17:57.487 **********
2026-04-17 06:13:01.923906 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-17 06:13:01.923918 | orchestrator |
2026-04-17 06:13:01.923928 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-17 06:13:01.923939 | orchestrator | Friday 17 April 2026 06:12:55 +0000 (0:00:00.510) 0:17:57.998 **********
2026-04-17 06:13:01.923950 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-17 06:13:01.923960 | orchestrator |
2026-04-17 06:13:01.923971 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-17 06:13:01.923982 | orchestrator | Friday 17 April 2026 06:12:56 +0000 (0:00:00.958) 0:17:58.957 **********
2026-04-17 06:13:01.923992 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.924003 | orchestrator |
2026-04-17 06:13:01.924014 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-17 06:13:01.924025 | orchestrator | Friday 17 April 2026 06:12:56 +0000 (0:00:00.591) 0:17:59.549 **********
2026-04-17 06:13:01.924035 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.924066 | orchestrator |
2026-04-17 06:13:01.924082 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-17 06:13:01.924093 | orchestrator | Friday 17 April 2026 06:12:56 +0000 (0:00:00.143) 0:17:59.692 **********
2026-04-17 06:13:01.924104 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.924114 | orchestrator |
2026-04-17 06:13:01.924125 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-17 06:13:01.924136 | orchestrator | Friday 17 April 2026 06:12:57 +0000 (0:00:00.155) 0:17:59.847 **********
2026-04-17 06:13:01.924146 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.924157 | orchestrator |
2026-04-17 06:13:01.924168 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-17 06:13:01.924178 | orchestrator | Friday 17 April 2026 06:12:57 +0000 (0:00:00.153) 0:18:00.000 **********
2026-04-17 06:13:01.924189 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.924200 | orchestrator |
2026-04-17 06:13:01.924210 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-17 06:13:01.924221 | orchestrator | Friday 17 April 2026 06:12:57 +0000 (0:00:00.147) 0:18:00.148 **********
2026-04-17 06:13:01.924232 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.924243 | orchestrator |
2026-04-17 06:13:01.924254 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-17 06:13:01.924264 | orchestrator | Friday 17 April 2026 06:12:57 +0000 (0:00:00.140) 0:18:00.288 **********
2026-04-17 06:13:01.924275 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.924286 | orchestrator |
2026-04-17 06:13:01.924297 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-17 06:13:01.924308 | orchestrator | Friday 17 April 2026 06:12:57 +0000 (0:00:00.157) 0:18:00.445 **********
2026-04-17 06:13:01.924318 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:13:01.924329 | orchestrator |
2026-04-17 06:13:01.924340 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-17 06:13:01.924359 | orchestrator | Friday 17
April 2026 06:12:57 +0000 (0:00:00.166) 0:18:00.614 ********** 2026-04-17 06:13:01.924370 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:01.924381 | orchestrator | 2026-04-17 06:13:01.924392 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-17 06:13:01.924402 | orchestrator | Friday 17 April 2026 06:12:58 +0000 (0:00:00.171) 0:18:00.785 ********** 2026-04-17 06:13:01.924413 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:01.924424 | orchestrator | 2026-04-17 06:13:01.924434 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-17 06:13:01.924445 | orchestrator | Friday 17 April 2026 06:12:58 +0000 (0:00:00.161) 0:18:00.947 ********** 2026-04-17 06:13:01.924456 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:13:01.924467 | orchestrator | 2026-04-17 06:13:01.924477 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-17 06:13:01.924488 | orchestrator | Friday 17 April 2026 06:12:58 +0000 (0:00:00.216) 0:18:01.164 ********** 2026-04-17 06:13:01.924498 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-17 06:13:01.924509 | orchestrator | 2026-04-17 06:13:01.924520 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-17 06:13:01.924530 | orchestrator | Friday 17 April 2026 06:13:01 +0000 (0:00:03.370) 0:18:04.534 ********** 2026-04-17 06:13:01.924548 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 06:13:23.725709 | orchestrator | 2026-04-17 06:13:23.725824 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-17 06:13:23.725842 | orchestrator | Friday 17 April 2026 06:13:02 +0000 (0:00:00.235) 0:18:04.770 ********** 2026-04-17 06:13:23.725856 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-17 06:13:23.725871 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-17 06:13:23.725885 | orchestrator | 2026-04-17 06:13:23.725897 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-17 06:13:23.725908 | orchestrator | Friday 17 April 2026 06:13:09 +0000 (0:00:07.627) 0:18:12.397 ********** 2026-04-17 06:13:23.725919 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.725931 | orchestrator | 2026-04-17 06:13:23.725942 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-17 06:13:23.725953 | orchestrator | Friday 17 April 2026 06:13:09 +0000 (0:00:00.163) 0:18:12.561 ********** 2026-04-17 06:13:23.725964 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.725975 | orchestrator | 2026-04-17 06:13:23.725986 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 06:13:23.725999 | orchestrator | Friday 17 April 2026 06:13:09 +0000 (0:00:00.130) 0:18:12.691 ********** 2026-04-17 06:13:23.726010 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.726086 | orchestrator | 2026-04-17 06:13:23.726130 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-04-17 06:13:23.726157 | orchestrator | Friday 17 April 2026 06:13:10 +0000 (0:00:00.158) 0:18:12.849 ********** 2026-04-17 06:13:23.726169 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.726180 | orchestrator | 2026-04-17 06:13:23.726224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 06:13:23.726236 | orchestrator | Friday 17 April 2026 06:13:10 +0000 (0:00:00.191) 0:18:13.041 ********** 2026-04-17 06:13:23.726269 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.726282 | orchestrator | 2026-04-17 06:13:23.726294 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 06:13:23.726320 | orchestrator | Friday 17 April 2026 06:13:10 +0000 (0:00:00.182) 0:18:13.224 ********** 2026-04-17 06:13:23.726332 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:13:23.726346 | orchestrator | 2026-04-17 06:13:23.726358 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 06:13:23.726370 | orchestrator | Friday 17 April 2026 06:13:10 +0000 (0:00:00.248) 0:18:13.473 ********** 2026-04-17 06:13:23.726383 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-17 06:13:23.726396 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-17 06:13:23.726408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-17 06:13:23.726421 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.726433 | orchestrator | 2026-04-17 06:13:23.726447 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:13:23.726459 | orchestrator | Friday 17 April 2026 06:13:11 +0000 (0:00:00.463) 0:18:13.937 ********** 2026-04-17 06:13:23.726472 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-17 06:13:23.726484 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-17 06:13:23.726496 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-17 06:13:23.726508 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.726520 | orchestrator | 2026-04-17 06:13:23.726532 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:13:23.726545 | orchestrator | Friday 17 April 2026 06:13:11 +0000 (0:00:00.449) 0:18:14.386 ********** 2026-04-17 06:13:23.726557 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-17 06:13:23.726569 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-17 06:13:23.726581 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-17 06:13:23.726594 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.726606 | orchestrator | 2026-04-17 06:13:23.726617 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 06:13:23.726628 | orchestrator | Friday 17 April 2026 06:13:12 +0000 (0:00:00.517) 0:18:14.904 ********** 2026-04-17 06:13:23.726639 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:13:23.726649 | orchestrator | 2026-04-17 06:13:23.726660 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:13:23.726671 | orchestrator | Friday 17 April 2026 06:13:12 +0000 (0:00:00.178) 0:18:15.082 ********** 2026-04-17 06:13:23.726682 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-17 06:13:23.726693 | orchestrator | 2026-04-17 06:13:23.726704 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-17 06:13:23.726715 | orchestrator | Friday 17 April 2026 06:13:12 +0000 (0:00:00.473) 0:18:15.556 ********** 2026-04-17 06:13:23.726725 | orchestrator | changed: [testbed-node-4] 2026-04-17 06:13:23.726736 | orchestrator | 
2026-04-17 06:13:23.726747 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-17 06:13:23.726758 | orchestrator | Friday 17 April 2026 06:13:14 +0000 (0:00:01.827) 0:18:17.383 ********** 2026-04-17 06:13:23.726769 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:13:23.726779 | orchestrator | 2026-04-17 06:13:23.726809 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-17 06:13:23.726821 | orchestrator | Friday 17 April 2026 06:13:14 +0000 (0:00:00.155) 0:18:17.539 ********** 2026-04-17 06:13:23.726832 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:13:23.726843 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:13:23.726854 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:13:23.726874 | orchestrator | 2026-04-17 06:13:23.726885 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-17 06:13:23.726896 | orchestrator | Friday 17 April 2026 06:13:15 +0000 (0:00:00.746) 0:18:18.285 ********** 2026-04-17 06:13:23.726907 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-04-17 06:13:23.726917 | orchestrator | 2026-04-17 06:13:23.726928 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-17 06:13:23.726939 | orchestrator | Friday 17 April 2026 06:13:15 +0000 (0:00:00.204) 0:18:18.490 ********** 2026-04-17 06:13:23.726949 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.726960 | orchestrator | 2026-04-17 06:13:23.726971 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-17 06:13:23.726982 | orchestrator | Friday 17 April 2026 06:13:15 +0000 (0:00:00.151) 
0:18:18.642 ********** 2026-04-17 06:13:23.726992 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.727003 | orchestrator | 2026-04-17 06:13:23.727014 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-17 06:13:23.727024 | orchestrator | Friday 17 April 2026 06:13:16 +0000 (0:00:00.137) 0:18:18.779 ********** 2026-04-17 06:13:23.727035 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:13:23.727046 | orchestrator | 2026-04-17 06:13:23.727057 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-17 06:13:23.727068 | orchestrator | Friday 17 April 2026 06:13:16 +0000 (0:00:00.442) 0:18:19.221 ********** 2026-04-17 06:13:23.727078 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:13:23.727089 | orchestrator | 2026-04-17 06:13:23.727119 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-17 06:13:23.727130 | orchestrator | Friday 17 April 2026 06:13:16 +0000 (0:00:00.157) 0:18:19.379 ********** 2026-04-17 06:13:23.727147 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-17 06:13:23.727158 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-17 06:13:23.727169 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-17 06:13:23.727180 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-17 06:13:23.727191 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-17 06:13:23.727202 | orchestrator | 2026-04-17 06:13:23.727212 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-17 06:13:23.727223 | orchestrator | Friday 17 April 2026 06:13:18 +0000 (0:00:01.814) 0:18:21.193 ********** 2026-04-17 
06:13:23.727234 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.727245 | orchestrator | 2026-04-17 06:13:23.727256 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-17 06:13:23.727266 | orchestrator | Friday 17 April 2026 06:13:18 +0000 (0:00:00.136) 0:18:21.330 ********** 2026-04-17 06:13:23.727277 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-04-17 06:13:23.727288 | orchestrator | 2026-04-17 06:13:23.727299 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-17 06:13:23.727309 | orchestrator | Friday 17 April 2026 06:13:18 +0000 (0:00:00.213) 0:18:21.543 ********** 2026-04-17 06:13:23.727320 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-17 06:13:23.727331 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-17 06:13:23.727342 | orchestrator | 2026-04-17 06:13:23.727353 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-17 06:13:23.727364 | orchestrator | Friday 17 April 2026 06:13:20 +0000 (0:00:01.254) 0:18:22.798 ********** 2026-04-17 06:13:23.727374 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 06:13:23.727385 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-17 06:13:23.727396 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 06:13:23.727414 | orchestrator | 2026-04-17 06:13:23.727425 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-17 06:13:23.727436 | orchestrator | Friday 17 April 2026 06:13:22 +0000 (0:00:02.193) 0:18:24.992 ********** 2026-04-17 06:13:23.727447 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-17 06:13:23.727457 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-17 
06:13:23.727468 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:13:23.727479 | orchestrator | 2026-04-17 06:13:23.727490 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-17 06:13:23.727501 | orchestrator | Friday 17 April 2026 06:13:23 +0000 (0:00:00.960) 0:18:25.952 ********** 2026-04-17 06:13:23.727511 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.727522 | orchestrator | 2026-04-17 06:13:23.727533 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-17 06:13:23.727543 | orchestrator | Friday 17 April 2026 06:13:23 +0000 (0:00:00.240) 0:18:26.193 ********** 2026-04-17 06:13:23.727554 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.727565 | orchestrator | 2026-04-17 06:13:23.727576 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-17 06:13:23.727587 | orchestrator | Friday 17 April 2026 06:13:23 +0000 (0:00:00.143) 0:18:26.336 ********** 2026-04-17 06:13:23.727597 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:13:23.727608 | orchestrator | 2026-04-17 06:13:23.727625 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-17 06:14:00.470444 | orchestrator | Friday 17 April 2026 06:13:23 +0000 (0:00:00.125) 0:18:26.462 ********** 2026-04-17 06:14:00.470563 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-04-17 06:14:00.470580 | orchestrator | 2026-04-17 06:14:00.470593 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-17 06:14:00.470604 | orchestrator | Friday 17 April 2026 06:13:23 +0000 (0:00:00.207) 0:18:26.669 ********** 2026-04-17 06:14:00.470615 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:14:00.470642 | orchestrator | 2026-04-17 06:14:00.470654 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-04-17 06:14:00.470665 | orchestrator | Friday 17 April 2026 06:13:24 +0000 (0:00:00.490) 0:18:27.160 ********** 2026-04-17 06:14:00.470687 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:14:00.470698 | orchestrator | 2026-04-17 06:14:00.470709 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-17 06:14:00.470720 | orchestrator | Friday 17 April 2026 06:13:26 +0000 (0:00:02.360) 0:18:29.520 ********** 2026-04-17 06:14:00.470730 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-04-17 06:14:00.470741 | orchestrator | 2026-04-17 06:14:00.470752 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-17 06:14:00.470763 | orchestrator | Friday 17 April 2026 06:13:27 +0000 (0:00:00.233) 0:18:29.753 ********** 2026-04-17 06:14:00.470773 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:14:00.470784 | orchestrator | 2026-04-17 06:14:00.470795 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-17 06:14:00.470805 | orchestrator | Friday 17 April 2026 06:13:27 +0000 (0:00:00.938) 0:18:30.692 ********** 2026-04-17 06:14:00.470816 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:14:00.470827 | orchestrator | 2026-04-17 06:14:00.470837 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-17 06:14:00.470848 | orchestrator | Friday 17 April 2026 06:13:29 +0000 (0:00:01.308) 0:18:32.001 ********** 2026-04-17 06:14:00.470859 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:14:00.470869 | orchestrator | 2026-04-17 06:14:00.470880 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-17 06:14:00.470891 | orchestrator | Friday 17 April 2026 06:13:30 +0000 (0:00:01.193) 0:18:33.195 ********** 2026-04-17 
06:14:00.470902 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.470913 | orchestrator | 2026-04-17 06:14:00.470962 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-17 06:14:00.470974 | orchestrator | Friday 17 April 2026 06:13:30 +0000 (0:00:00.163) 0:18:33.358 ********** 2026-04-17 06:14:00.470987 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471001 | orchestrator | 2026-04-17 06:14:00.471014 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-17 06:14:00.471027 | orchestrator | Friday 17 April 2026 06:13:30 +0000 (0:00:00.147) 0:18:33.506 ********** 2026-04-17 06:14:00.471039 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-04-17 06:14:00.471052 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-04-17 06:14:00.471064 | orchestrator | 2026-04-17 06:14:00.471077 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-17 06:14:00.471090 | orchestrator | Friday 17 April 2026 06:13:31 +0000 (0:00:00.840) 0:18:34.347 ********** 2026-04-17 06:14:00.471102 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-04-17 06:14:00.471114 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-04-17 06:14:00.471127 | orchestrator | 2026-04-17 06:14:00.471139 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-17 06:14:00.471151 | orchestrator | Friday 17 April 2026 06:13:33 +0000 (0:00:02.018) 0:18:36.365 ********** 2026-04-17 06:14:00.471164 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-04-17 06:14:00.471176 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-04-17 06:14:00.471209 | orchestrator | 2026-04-17 06:14:00.471222 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-17 06:14:00.471235 | orchestrator | Friday 17 April 2026 06:13:37 +0000 (0:00:03.707) 
0:18:40.073 ********** 2026-04-17 06:14:00.471247 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471259 | orchestrator | 2026-04-17 06:14:00.471271 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-17 06:14:00.471285 | orchestrator | Friday 17 April 2026 06:13:37 +0000 (0:00:00.275) 0:18:40.349 ********** 2026-04-17 06:14:00.471298 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471310 | orchestrator | 2026-04-17 06:14:00.471322 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-17 06:14:00.471334 | orchestrator | Friday 17 April 2026 06:13:37 +0000 (0:00:00.267) 0:18:40.617 ********** 2026-04-17 06:14:00.471347 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471357 | orchestrator | 2026-04-17 06:14:00.471368 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-17 06:14:00.471379 | orchestrator | Friday 17 April 2026 06:13:38 +0000 (0:00:00.324) 0:18:40.941 ********** 2026-04-17 06:14:00.471390 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471400 | orchestrator | 2026-04-17 06:14:00.471412 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-17 06:14:00.471423 | orchestrator | Friday 17 April 2026 06:13:38 +0000 (0:00:00.148) 0:18:41.090 ********** 2026-04-17 06:14:00.471433 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471444 | orchestrator | 2026-04-17 06:14:00.471455 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-04-17 06:14:00.471465 | orchestrator | Friday 17 April 2026 06:13:38 +0000 (0:00:00.146) 0:18:41.236 ********** 2026-04-17 06:14:00.471476 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-04-17 06:14:00.471488 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-04-17 06:14:00.471499 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-04-17 06:14:00.471527 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-04-17 06:14:00.471539 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (596 retries left). 2026-04-17 06:14:00.471550 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:14:00.471569 | orchestrator | 2026-04-17 06:14:00.471580 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-17 06:14:00.471591 | orchestrator | Friday 17 April 2026 06:13:55 +0000 (0:00:17.164) 0:18:58.401 ********** 2026-04-17 06:14:00.471602 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471613 | orchestrator | 2026-04-17 06:14:00.471623 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-17 06:14:00.471634 | orchestrator | Friday 17 April 2026 06:13:55 +0000 (0:00:00.140) 0:18:58.542 ********** 2026-04-17 06:14:00.471644 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471655 | orchestrator | 2026-04-17 06:14:00.471666 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-17 06:14:00.471676 | orchestrator | Friday 17 April 2026 06:13:55 +0000 (0:00:00.135) 0:18:58.677 ********** 2026-04-17 06:14:00.471687 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471697 | orchestrator | 2026-04-17 06:14:00.471708 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-17 06:14:00.471718 | orchestrator | Friday 17 April 2026 06:13:56 +0000 
(0:00:00.127) 0:18:58.804 ********** 2026-04-17 06:14:00.471729 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471740 | orchestrator | 2026-04-17 06:14:00.471750 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-17 06:14:00.471761 | orchestrator | Friday 17 April 2026 06:13:56 +0000 (0:00:00.144) 0:18:58.949 ********** 2026-04-17 06:14:00.471771 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471782 | orchestrator | 2026-04-17 06:14:00.471792 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-17 06:14:00.471803 | orchestrator | Friday 17 April 2026 06:13:56 +0000 (0:00:00.158) 0:18:59.108 ********** 2026-04-17 06:14:00.471814 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471824 | orchestrator | 2026-04-17 06:14:00.471835 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-17 06:14:00.471852 | orchestrator | Friday 17 April 2026 06:13:56 +0000 (0:00:00.155) 0:18:59.263 ********** 2026-04-17 06:14:00.471862 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:14:00.471873 | orchestrator | 2026-04-17 06:14:00.471884 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-17 06:14:00.471895 | orchestrator | 2026-04-17 06:14:00.471905 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:14:00.471916 | orchestrator | Friday 17 April 2026 06:13:57 +0000 (0:00:00.634) 0:18:59.897 ********** 2026-04-17 06:14:00.471927 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-04-17 06:14:00.471937 | orchestrator | 2026-04-17 06:14:00.471948 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 06:14:00.471958 | orchestrator | Friday 17 April 2026 06:13:57 +0000 
(0:00:00.248) 0:19:00.145 **********
2026-04-17 06:14:00.471969 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:00.471979 | orchestrator |
2026-04-17 06:14:00.471990 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-17 06:14:00.472001 | orchestrator | Friday 17 April 2026 06:13:57 +0000 (0:00:00.453) 0:19:00.599 **********
2026-04-17 06:14:00.472011 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:00.472022 | orchestrator |
2026-04-17 06:14:00.472032 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-17 06:14:00.472043 | orchestrator | Friday 17 April 2026 06:13:58 +0000 (0:00:00.549) 0:19:01.148 **********
2026-04-17 06:14:00.472054 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:00.472064 | orchestrator |
2026-04-17 06:14:00.472075 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-17 06:14:00.472086 | orchestrator | Friday 17 April 2026 06:13:58 +0000 (0:00:00.448) 0:19:01.597 **********
2026-04-17 06:14:00.472097 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:00.472107 | orchestrator |
2026-04-17 06:14:00.472118 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-17 06:14:00.472135 | orchestrator | Friday 17 April 2026 06:13:59 +0000 (0:00:00.162) 0:19:01.760 **********
2026-04-17 06:14:00.472146 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:00.472156 | orchestrator |
2026-04-17 06:14:00.472167 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-17 06:14:00.472178 | orchestrator | Friday 17 April 2026 06:13:59 +0000 (0:00:00.169) 0:19:01.929 **********
2026-04-17 06:14:00.472208 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:00.472219 | orchestrator |
2026-04-17 06:14:00.472230 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-17 06:14:00.472241 | orchestrator | Friday 17 April 2026 06:13:59 +0000 (0:00:00.181) 0:19:02.111 **********
2026-04-17 06:14:00.472251 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:00.472262 | orchestrator |
2026-04-17 06:14:00.472273 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-17 06:14:00.472284 | orchestrator | Friday 17 April 2026 06:13:59 +0000 (0:00:00.177) 0:19:02.288 **********
2026-04-17 06:14:00.472294 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:00.472305 | orchestrator |
2026-04-17 06:14:00.472316 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-17 06:14:00.472326 | orchestrator | Friday 17 April 2026 06:13:59 +0000 (0:00:00.186) 0:19:02.475 **********
2026-04-17 06:14:00.472337 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:14:00.472348 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:14:00.472359 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:14:00.472370 | orchestrator |
2026-04-17 06:14:00.472381 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-17 06:14:00.472398 | orchestrator | Friday 17 April 2026 06:14:00 +0000 (0:00:00.730) 0:19:03.205 **********
2026-04-17 06:14:08.593439 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:08.593551 | orchestrator |
2026-04-17 06:14:08.593569 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-17 06:14:08.593583 | orchestrator | Friday 17 April 2026 06:14:00 +0000 (0:00:00.285) 0:19:03.491 **********
2026-04-17 06:14:08.593594 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:14:08.593606 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:14:08.593617 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:14:08.593628 | orchestrator |
2026-04-17 06:14:08.593638 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-17 06:14:08.593649 | orchestrator | Friday 17 April 2026 06:14:03 +0000 (0:00:02.276) 0:19:05.767 **********
2026-04-17 06:14:08.593660 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-17 06:14:08.593672 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-17 06:14:08.593682 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-17 06:14:08.593693 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:08.593704 | orchestrator |
2026-04-17 06:14:08.593715 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-17 06:14:08.593725 | orchestrator | Friday 17 April 2026 06:14:03 +0000 (0:00:00.474) 0:19:06.241 **********
2026-04-17 06:14:08.593738 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:14:08.593752 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:14:08.593778 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:14:08.593810 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:08.593822 | orchestrator |
2026-04-17 06:14:08.593832 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-17 06:14:08.593843 | orchestrator | Friday 17 April 2026 06:14:04 +0000 (0:00:01.120) 0:19:07.362 **********
2026-04-17 06:14:08.593856 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:08.593870 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:08.593881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:08.593892 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:08.593903 | orchestrator |
2026-04-17 06:14:08.593914 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-17 06:14:08.593925 | orchestrator | Friday 17 April 2026 06:14:04 +0000 (0:00:00.177) 0:19:07.539 **********
2026-04-17 06:14:08.593955 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:14:01.279760', 'end': '2026-04-17 06:14:01.326508', 'delta': '0:00:00.046748', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:14:08.593971 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:14:01.854000', 'end': '2026-04-17 06:14:01.890899', 'delta': '0:00:00.036899', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:14:08.593991 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:14:02.825379', 'end': '2026-04-17 06:14:02.868989', 'delta': '0:00:00.043610', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:14:08.594014 | orchestrator |
2026-04-17 06:14:08.594122 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-17 06:14:08.594135 | orchestrator | Friday 17 April 2026 06:14:05 +0000 (0:00:00.606) 0:19:08.146 **********
2026-04-17 06:14:08.594148 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:08.594161 | orchestrator |
2026-04-17 06:14:08.594174 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-17 06:14:08.594186 | orchestrator | Friday 17 April 2026 06:14:05 +0000 (0:00:00.274) 0:19:08.421 **********
2026-04-17 06:14:08.594219 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:08.594232 | orchestrator |
2026-04-17 06:14:08.594244 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-17 06:14:08.594257 | orchestrator | Friday 17 April 2026 06:14:05 +0000 (0:00:00.270) 0:19:08.692 **********
2026-04-17 06:14:08.594268 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:08.594280 | orchestrator |
2026-04-17 06:14:08.594293 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-17 06:14:08.594306 | orchestrator | Friday 17 April 2026 06:14:06 +0000 (0:00:00.186) 0:19:08.879 **********
2026-04-17 06:14:08.594318 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:14:08.594331 | orchestrator |
2026-04-17 06:14:08.594342 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:14:08.594353 | orchestrator | Friday 17 April 2026 06:14:07 +0000 (0:00:01.022) 0:19:09.902 **********
2026-04-17 06:14:08.594363 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:08.594374 | orchestrator |
2026-04-17 06:14:08.594384 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-17 06:14:08.594395 | orchestrator | Friday 17 April 2026 06:14:07 +0000 (0:00:00.162) 0:19:10.064 **********
2026-04-17 06:14:08.594405 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:08.594416 | orchestrator |
2026-04-17 06:14:08.594427 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-17 06:14:08.594437 | orchestrator | Friday 17 April 2026 06:14:07 +0000 (0:00:00.158) 0:19:10.223 **********
2026-04-17 06:14:08.594448 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:08.594458 | orchestrator |
2026-04-17 06:14:08.594469 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:14:08.594479 | orchestrator | Friday 17 April 2026 06:14:07 +0000 (0:00:00.221) 0:19:10.444 **********
2026-04-17 06:14:08.594490 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:08.594500 | orchestrator |
2026-04-17 06:14:08.594511 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-17 06:14:08.594522 | orchestrator | Friday 17 April 2026 06:14:07 +0000 (0:00:00.188) 0:19:10.632 **********
2026-04-17 06:14:08.594532 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:08.594542 | orchestrator |
2026-04-17 06:14:08.594553 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-17 06:14:08.594564 | orchestrator | Friday 17 April 2026 06:14:08 +0000 (0:00:00.155) 0:19:10.788 **********
2026-04-17 06:14:08.594574 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:08.594585 | orchestrator |
2026-04-17 06:14:08.594595 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-17 06:14:08.594606 | orchestrator | Friday 17 April 2026 06:14:08 +0000 (0:00:00.207) 0:19:10.996 **********
2026-04-17 06:14:08.594617 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:08.594627 | orchestrator |
2026-04-17 06:14:08.594638 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-17 06:14:08.594657 | orchestrator | Friday 17 April 2026 06:14:08 +0000 (0:00:00.131) 0:19:11.128 **********
2026-04-17 06:14:08.594669 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:08.594679 | orchestrator |
2026-04-17 06:14:08.594690 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-17 06:14:08.594710 | orchestrator | Friday 17 April 2026 06:14:08 +0000 (0:00:00.202) 0:19:11.330 **********
2026-04-17 06:14:09.657914 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:09.658014 | orchestrator |
2026-04-17 06:14:09.658085 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-17 06:14:09.658100 | orchestrator | Friday 17 April 2026 06:14:08 +0000 (0:00:00.138) 0:19:11.468 **********
2026-04-17 06:14:09.658111 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:09.658123 | orchestrator |
2026-04-17 06:14:09.658134 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-17 06:14:09.658145 | orchestrator | Friday 17 April 2026 06:14:09 +0000 (0:00:00.614) 0:19:12.082 **********
2026-04-17 06:14:09.658158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:14:09.658193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'uuids': ['7145b7e9-237d-4eff-af62-82cfb643a183'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS']}})
2026-04-17 06:14:09.658238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ab95973', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-17 06:14:09.658253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f']}})
2026-04-17 06:14:09.658266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:14:09.658299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:14:09.658330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-17 06:14:09.658343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:14:09.658355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ', 'dm-uuid-CRYPT-LUKS2-9b48552cb2fb461da2ba0698b00ea049-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-17 06:14:09.658372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:14:09.658384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'uuids': ['9b48552c-b2fb-461d-a2ba-0698b00ea049'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ']}})
2026-04-17 06:14:09.658396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d']}})
2026-04-17 06:14:09.658414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:14:09.658448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b9d69c97', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-17 06:14:10.031325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:14:10.031431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:14:10.031447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS', 'dm-uuid-CRYPT-LUKS2-7145b7e9237d4effaf6282cfb643a183-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-17 06:14:10.031485 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:10.031498 | orchestrator |
2026-04-17 06:14:10.031510 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-17 06:14:10.031522 | orchestrator | Friday 17 April 2026 06:14:09 +0000 (0:00:00.471) 0:19:12.554 **********
2026-04-17 06:14:10.031534 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:10.031547 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'uuids': ['7145b7e9-237d-4eff-af62-82cfb643a183'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS']}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:10.031574 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ab95973', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:10.031605 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f']}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:10.031620 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:10.031640 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:10.031652 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:10.031664 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:10.031687 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ', 'dm-uuid-CRYPT-LUKS2-9b48552cb2fb461da2ba0698b00ea049-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:11.390547 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:11.390662 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'uuids': ['9b48552c-b2fb-461d-a2ba-0698b00ea049'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ']}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:11.390740 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d']}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:11.390758 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:14:11.390802 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b9d69c97', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:14:11.390824 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:14:11.390836 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:14:11.390846 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS', 'dm-uuid-CRYPT-LUKS2-7145b7e9237d4effaf6282cfb643a183-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:14:11.390857 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:11.390870 | orchestrator | 2026-04-17 06:14:11.390881 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-17 06:14:11.390892 | orchestrator | Friday 17 April 2026 06:14:10 +0000 (0:00:00.471) 0:19:13.026 ********** 2026-04-17 06:14:11.390940 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:11.390953 | orchestrator | 2026-04-17 06:14:11.390964 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-17 06:14:11.390979 | orchestrator | Friday 17 April 2026 06:14:10 +0000 (0:00:00.484) 0:19:13.511 ********** 2026-04-17 06:14:11.390989 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:11.390999 | orchestrator | 2026-04-17 06:14:11.391008 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 06:14:11.391018 | orchestrator | Friday 17 April 2026 06:14:10 +0000 (0:00:00.124) 0:19:13.636 ********** 2026-04-17 06:14:11.391027 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:11.391037 | orchestrator | 2026-04-17 06:14:11.391047 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:14:11.391064 | orchestrator | Friday 17 April 2026 06:14:11 +0000 (0:00:00.492) 0:19:14.129 ********** 2026-04-17 06:14:27.226314 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.226440 | orchestrator | 2026-04-17 06:14:27.226458 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 06:14:27.226472 | orchestrator | Friday 17 April 2026 06:14:11 +0000 (0:00:00.165) 0:19:14.294 ********** 2026-04-17 06:14:27.226483 | orchestrator | skipping: [testbed-node-5] 2026-04-17 
06:14:27.226494 | orchestrator | 2026-04-17 06:14:27.226506 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:14:27.226517 | orchestrator | Friday 17 April 2026 06:14:11 +0000 (0:00:00.301) 0:19:14.596 ********** 2026-04-17 06:14:27.226528 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.226539 | orchestrator | 2026-04-17 06:14:27.226550 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-17 06:14:27.226560 | orchestrator | Friday 17 April 2026 06:14:12 +0000 (0:00:00.161) 0:19:14.758 ********** 2026-04-17 06:14:27.226572 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-17 06:14:27.226583 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-17 06:14:27.226594 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-17 06:14:27.226605 | orchestrator | 2026-04-17 06:14:27.226616 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-17 06:14:27.226627 | orchestrator | Friday 17 April 2026 06:14:13 +0000 (0:00:01.057) 0:19:15.815 ********** 2026-04-17 06:14:27.226638 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-17 06:14:27.226649 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-17 06:14:27.226660 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-17 06:14:27.226670 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.226681 | orchestrator | 2026-04-17 06:14:27.226692 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-17 06:14:27.226703 | orchestrator | Friday 17 April 2026 06:14:13 +0000 (0:00:00.171) 0:19:15.987 ********** 2026-04-17 06:14:27.226713 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-04-17 06:14:27.226725 | 
orchestrator | 2026-04-17 06:14:27.226737 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 06:14:27.226749 | orchestrator | Friday 17 April 2026 06:14:13 +0000 (0:00:00.224) 0:19:16.212 ********** 2026-04-17 06:14:27.226787 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.226799 | orchestrator | 2026-04-17 06:14:27.226812 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-17 06:14:27.226824 | orchestrator | Friday 17 April 2026 06:14:13 +0000 (0:00:00.526) 0:19:16.738 ********** 2026-04-17 06:14:27.226836 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.226849 | orchestrator | 2026-04-17 06:14:27.226861 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 06:14:27.226873 | orchestrator | Friday 17 April 2026 06:14:14 +0000 (0:00:00.159) 0:19:16.898 ********** 2026-04-17 06:14:27.226886 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.226898 | orchestrator | 2026-04-17 06:14:27.226910 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 06:14:27.226923 | orchestrator | Friday 17 April 2026 06:14:14 +0000 (0:00:00.195) 0:19:17.094 ********** 2026-04-17 06:14:27.226935 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:27.226948 | orchestrator | 2026-04-17 06:14:27.226960 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 06:14:27.226973 | orchestrator | Friday 17 April 2026 06:14:14 +0000 (0:00:00.276) 0:19:17.371 ********** 2026-04-17 06:14:27.226985 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:14:27.226998 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:14:27.227010 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-04-17 06:14:27.227023 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.227061 | orchestrator | 2026-04-17 06:14:27.227073 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:14:27.227086 | orchestrator | Friday 17 April 2026 06:14:15 +0000 (0:00:00.423) 0:19:17.795 ********** 2026-04-17 06:14:27.227099 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:14:27.227111 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:14:27.227124 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-17 06:14:27.227136 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.227148 | orchestrator | 2026-04-17 06:14:27.227161 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:14:27.227173 | orchestrator | Friday 17 April 2026 06:14:15 +0000 (0:00:00.422) 0:19:18.217 ********** 2026-04-17 06:14:27.227183 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:14:27.227194 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:14:27.227205 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-17 06:14:27.227216 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.227226 | orchestrator | 2026-04-17 06:14:27.227237 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 06:14:27.227268 | orchestrator | Friday 17 April 2026 06:14:15 +0000 (0:00:00.452) 0:19:18.669 ********** 2026-04-17 06:14:27.227279 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:27.227289 | orchestrator | 2026-04-17 06:14:27.227315 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:14:27.227326 | orchestrator | Friday 17 April 2026 06:14:16 +0000 
(0:00:00.171) 0:19:18.841 ********** 2026-04-17 06:14:27.227337 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-17 06:14:27.227348 | orchestrator | 2026-04-17 06:14:27.227359 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-17 06:14:27.227369 | orchestrator | Friday 17 April 2026 06:14:16 +0000 (0:00:00.394) 0:19:19.236 ********** 2026-04-17 06:14:27.227398 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:14:27.227410 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:14:27.227421 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:14:27.227431 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:14:27.227442 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 06:14:27.227453 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-17 06:14:27.227464 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:14:27.227475 | orchestrator | 2026-04-17 06:14:27.227485 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-17 06:14:27.227496 | orchestrator | Friday 17 April 2026 06:14:17 +0000 (0:00:01.316) 0:19:20.552 ********** 2026-04-17 06:14:27.227507 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:14:27.227517 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:14:27.227528 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:14:27.227539 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-17 06:14:27.227549 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 06:14:27.227560 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-17 06:14:27.227570 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:14:27.227581 | orchestrator | 2026-04-17 06:14:27.227592 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-04-17 06:14:27.227602 | orchestrator | Friday 17 April 2026 06:14:19 +0000 (0:00:01.880) 0:19:22.433 ********** 2026-04-17 06:14:27.227622 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:27.227633 | orchestrator | 2026-04-17 06:14:27.227643 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-04-17 06:14:27.227654 | orchestrator | Friday 17 April 2026 06:14:20 +0000 (0:00:00.484) 0:19:22.918 ********** 2026-04-17 06:14:27.227665 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:27.227675 | orchestrator | 2026-04-17 06:14:27.227686 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-04-17 06:14:27.227697 | orchestrator | Friday 17 April 2026 06:14:20 +0000 (0:00:00.608) 0:19:23.527 ********** 2026-04-17 06:14:27.227708 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:27.227718 | orchestrator | 2026-04-17 06:14:27.227729 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-04-17 06:14:27.227740 | orchestrator | Friday 17 April 2026 06:14:21 +0000 (0:00:00.259) 0:19:23.786 ********** 2026-04-17 06:14:27.227751 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-17 06:14:27.227762 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-17 06:14:27.227772 | orchestrator | 2026-04-17 06:14:27.227783 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-04-17 06:14:27.227794 | orchestrator | Friday 17 April 2026 06:14:24 +0000 (0:00:03.096) 0:19:26.882 ********** 2026-04-17 06:14:27.227805 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-04-17 06:14:27.227816 | orchestrator | 2026-04-17 06:14:27.227827 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 06:14:27.227838 | orchestrator | Friday 17 April 2026 06:14:24 +0000 (0:00:00.211) 0:19:27.094 ********** 2026-04-17 06:14:27.227848 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-04-17 06:14:27.227859 | orchestrator | 2026-04-17 06:14:27.227870 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 06:14:27.227881 | orchestrator | Friday 17 April 2026 06:14:24 +0000 (0:00:00.234) 0:19:27.328 ********** 2026-04-17 06:14:27.227891 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.227902 | orchestrator | 2026-04-17 06:14:27.227913 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 06:14:27.227924 | orchestrator | Friday 17 April 2026 06:14:24 +0000 (0:00:00.149) 0:19:27.478 ********** 2026-04-17 06:14:27.227934 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:27.227945 | orchestrator | 2026-04-17 06:14:27.227956 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-17 06:14:27.227967 | orchestrator | Friday 17 April 2026 06:14:25 +0000 (0:00:00.568) 0:19:28.047 ********** 2026-04-17 06:14:27.227978 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:27.227988 | orchestrator | 2026-04-17 06:14:27.227999 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 06:14:27.228010 | orchestrator | 
Friday 17 April 2026 06:14:25 +0000 (0:00:00.547) 0:19:28.595 ********** 2026-04-17 06:14:27.228021 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:27.228031 | orchestrator | 2026-04-17 06:14:27.228042 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-17 06:14:27.228053 | orchestrator | Friday 17 April 2026 06:14:26 +0000 (0:00:00.535) 0:19:29.130 ********** 2026-04-17 06:14:27.228064 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.228074 | orchestrator | 2026-04-17 06:14:27.228091 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 06:14:27.228102 | orchestrator | Friday 17 April 2026 06:14:26 +0000 (0:00:00.139) 0:19:29.270 ********** 2026-04-17 06:14:27.228113 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.228124 | orchestrator | 2026-04-17 06:14:27.228134 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 06:14:27.228145 | orchestrator | Friday 17 April 2026 06:14:26 +0000 (0:00:00.161) 0:19:29.431 ********** 2026-04-17 06:14:27.228156 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:27.228173 | orchestrator | 2026-04-17 06:14:27.228190 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 06:14:39.051469 | orchestrator | Friday 17 April 2026 06:14:27 +0000 (0:00:00.527) 0:19:29.958 ********** 2026-04-17 06:14:39.051589 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:39.051606 | orchestrator | 2026-04-17 06:14:39.051619 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 06:14:39.051631 | orchestrator | Friday 17 April 2026 06:14:27 +0000 (0:00:00.559) 0:19:30.518 ********** 2026-04-17 06:14:39.051642 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:39.051654 | orchestrator | 2026-04-17 06:14:39.051665 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 06:14:39.051677 | orchestrator | Friday 17 April 2026 06:14:28 +0000 (0:00:00.626) 0:19:31.145 ********** 2026-04-17 06:14:39.051688 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.051700 | orchestrator | 2026-04-17 06:14:39.051711 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 06:14:39.051721 | orchestrator | Friday 17 April 2026 06:14:28 +0000 (0:00:00.130) 0:19:31.275 ********** 2026-04-17 06:14:39.051732 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.051743 | orchestrator | 2026-04-17 06:14:39.051753 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 06:14:39.051764 | orchestrator | Friday 17 April 2026 06:14:28 +0000 (0:00:00.135) 0:19:31.411 ********** 2026-04-17 06:14:39.051775 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:39.051786 | orchestrator | 2026-04-17 06:14:39.051796 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 06:14:39.051807 | orchestrator | Friday 17 April 2026 06:14:28 +0000 (0:00:00.229) 0:19:31.641 ********** 2026-04-17 06:14:39.051818 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:39.051828 | orchestrator | 2026-04-17 06:14:39.051839 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 06:14:39.051850 | orchestrator | Friday 17 April 2026 06:14:29 +0000 (0:00:00.174) 0:19:31.815 ********** 2026-04-17 06:14:39.051861 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:39.051871 | orchestrator | 2026-04-17 06:14:39.051882 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 06:14:39.051893 | orchestrator | Friday 17 April 2026 06:14:29 +0000 (0:00:00.187) 0:19:32.002 ********** 2026-04-17 06:14:39.051904 | 
orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.051915 | orchestrator | 2026-04-17 06:14:39.051925 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 06:14:39.051936 | orchestrator | Friday 17 April 2026 06:14:29 +0000 (0:00:00.153) 0:19:32.155 ********** 2026-04-17 06:14:39.051947 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.051958 | orchestrator | 2026-04-17 06:14:39.051969 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 06:14:39.051979 | orchestrator | Friday 17 April 2026 06:14:29 +0000 (0:00:00.130) 0:19:32.286 ********** 2026-04-17 06:14:39.051990 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052003 | orchestrator | 2026-04-17 06:14:39.052016 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 06:14:39.052029 | orchestrator | Friday 17 April 2026 06:14:29 +0000 (0:00:00.149) 0:19:32.436 ********** 2026-04-17 06:14:39.052043 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:39.052056 | orchestrator | 2026-04-17 06:14:39.052069 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 06:14:39.052082 | orchestrator | Friday 17 April 2026 06:14:29 +0000 (0:00:00.158) 0:19:32.594 ********** 2026-04-17 06:14:39.052095 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:39.052107 | orchestrator | 2026-04-17 06:14:39.052118 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-17 06:14:39.052129 | orchestrator | Friday 17 April 2026 06:14:30 +0000 (0:00:00.734) 0:19:33.329 ********** 2026-04-17 06:14:39.052140 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052172 | orchestrator | 2026-04-17 06:14:39.052184 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-17 
06:14:39.052195 | orchestrator | Friday 17 April 2026 06:14:30 +0000 (0:00:00.140) 0:19:33.470 ********** 2026-04-17 06:14:39.052205 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052216 | orchestrator | 2026-04-17 06:14:39.052227 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-17 06:14:39.052237 | orchestrator | Friday 17 April 2026 06:14:30 +0000 (0:00:00.154) 0:19:33.625 ********** 2026-04-17 06:14:39.052248 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052259 | orchestrator | 2026-04-17 06:14:39.052293 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-17 06:14:39.052304 | orchestrator | Friday 17 April 2026 06:14:31 +0000 (0:00:00.152) 0:19:33.778 ********** 2026-04-17 06:14:39.052314 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052325 | orchestrator | 2026-04-17 06:14:39.052336 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-17 06:14:39.052346 | orchestrator | Friday 17 April 2026 06:14:31 +0000 (0:00:00.186) 0:19:33.964 ********** 2026-04-17 06:14:39.052357 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052368 | orchestrator | 2026-04-17 06:14:39.052378 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-17 06:14:39.052389 | orchestrator | Friday 17 April 2026 06:14:31 +0000 (0:00:00.145) 0:19:34.110 ********** 2026-04-17 06:14:39.052400 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052410 | orchestrator | 2026-04-17 06:14:39.052421 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-17 06:14:39.052431 | orchestrator | Friday 17 April 2026 06:14:31 +0000 (0:00:00.140) 0:19:34.250 ********** 2026-04-17 06:14:39.052457 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052468 | 
orchestrator | 2026-04-17 06:14:39.052479 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-17 06:14:39.052491 | orchestrator | Friday 17 April 2026 06:14:31 +0000 (0:00:00.139) 0:19:34.390 ********** 2026-04-17 06:14:39.052501 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052512 | orchestrator | 2026-04-17 06:14:39.052522 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-17 06:14:39.052533 | orchestrator | Friday 17 April 2026 06:14:31 +0000 (0:00:00.133) 0:19:34.523 ********** 2026-04-17 06:14:39.052561 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052573 | orchestrator | 2026-04-17 06:14:39.052583 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-17 06:14:39.052594 | orchestrator | Friday 17 April 2026 06:14:31 +0000 (0:00:00.136) 0:19:34.660 ********** 2026-04-17 06:14:39.052605 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052616 | orchestrator | 2026-04-17 06:14:39.052626 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-17 06:14:39.052637 | orchestrator | Friday 17 April 2026 06:14:32 +0000 (0:00:00.137) 0:19:34.798 ********** 2026-04-17 06:14:39.052648 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052658 | orchestrator | 2026-04-17 06:14:39.052669 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-17 06:14:39.052680 | orchestrator | Friday 17 April 2026 06:14:32 +0000 (0:00:00.131) 0:19:34.929 ********** 2026-04-17 06:14:39.052691 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052702 | orchestrator | 2026-04-17 06:14:39.052712 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-17 06:14:39.052723 | orchestrator | Friday 17 April 
2026 06:14:32 +0000 (0:00:00.215) 0:19:35.145 ********** 2026-04-17 06:14:39.052734 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:39.052744 | orchestrator | 2026-04-17 06:14:39.052755 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-17 06:14:39.052766 | orchestrator | Friday 17 April 2026 06:14:33 +0000 (0:00:01.360) 0:19:36.505 ********** 2026-04-17 06:14:39.052785 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:14:39.052796 | orchestrator | 2026-04-17 06:14:39.052806 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-17 06:14:39.052817 | orchestrator | Friday 17 April 2026 06:14:35 +0000 (0:00:01.239) 0:19:37.745 ********** 2026-04-17 06:14:39.052828 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-04-17 06:14:39.052840 | orchestrator | 2026-04-17 06:14:39.052850 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-17 06:14:39.052861 | orchestrator | Friday 17 April 2026 06:14:35 +0000 (0:00:00.243) 0:19:37.989 ********** 2026-04-17 06:14:39.052872 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052883 | orchestrator | 2026-04-17 06:14:39.052893 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-17 06:14:39.052904 | orchestrator | Friday 17 April 2026 06:14:35 +0000 (0:00:00.160) 0:19:38.150 ********** 2026-04-17 06:14:39.052915 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:14:39.052925 | orchestrator | 2026-04-17 06:14:39.052936 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-17 06:14:39.052947 | orchestrator | Friday 17 April 2026 06:14:35 +0000 (0:00:00.148) 0:19:38.298 ********** 2026-04-17 06:14:39.052957 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 06:14:39.052968 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 06:14:39.052979 | orchestrator |
2026-04-17 06:14:39.052989 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-17 06:14:39.053000 | orchestrator | Friday 17 April 2026 06:14:36 +0000 (0:00:00.809) 0:19:39.108 **********
2026-04-17 06:14:39.053011 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:39.053022 | orchestrator |
2026-04-17 06:14:39.053032 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-17 06:14:39.053043 | orchestrator | Friday 17 April 2026 06:14:36 +0000 (0:00:00.461) 0:19:39.569 **********
2026-04-17 06:14:39.053054 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:39.053065 | orchestrator |
2026-04-17 06:14:39.053075 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-17 06:14:39.053086 | orchestrator | Friday 17 April 2026 06:14:36 +0000 (0:00:00.168) 0:19:39.738 **********
2026-04-17 06:14:39.053096 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:39.053107 | orchestrator |
2026-04-17 06:14:39.053118 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-17 06:14:39.053128 | orchestrator | Friday 17 April 2026 06:14:37 +0000 (0:00:00.160) 0:19:39.898 **********
2026-04-17 06:14:39.053139 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:39.053150 | orchestrator |
2026-04-17 06:14:39.053160 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-17 06:14:39.053171 | orchestrator | Friday 17 April 2026 06:14:37 +0000 (0:00:00.162) 0:19:40.061 **********
2026-04-17 06:14:39.053182 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-04-17 06:14:39.053192 | orchestrator |
2026-04-17 06:14:39.053203 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-17 06:14:39.053214 | orchestrator | Friday 17 April 2026 06:14:37 +0000 (0:00:00.244) 0:19:40.306 **********
2026-04-17 06:14:39.053224 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:39.053235 | orchestrator |
2026-04-17 06:14:39.053246 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-17 06:14:39.053257 | orchestrator | Friday 17 April 2026 06:14:38 +0000 (0:00:01.062) 0:19:41.368 **********
2026-04-17 06:14:39.053298 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 06:14:39.053310 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 06:14:39.053327 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 06:14:39.053345 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:39.053356 | orchestrator |
2026-04-17 06:14:39.053367 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-17 06:14:39.053378 | orchestrator | Friday 17 April 2026 06:14:38 +0000 (0:00:00.156) 0:19:41.524 **********
2026-04-17 06:14:39.053389 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:39.053400 | orchestrator |
2026-04-17 06:14:39.053411 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-17 06:14:39.053422 | orchestrator | Friday 17 April 2026 06:14:38 +0000 (0:00:00.162) 0:19:41.687 **********
2026-04-17 06:14:39.053440 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.548771 | orchestrator |
2026-04-17 06:14:57.548889 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-17 06:14:57.548907 | orchestrator | Friday 17 April 2026 06:14:39 +0000 (0:00:00.191) 0:19:41.879 **********
2026-04-17 06:14:57.548919 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.548931 | orchestrator |
2026-04-17 06:14:57.548942 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-17 06:14:57.548953 | orchestrator | Friday 17 April 2026 06:14:39 +0000 (0:00:00.171) 0:19:42.050 **********
2026-04-17 06:14:57.548964 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.548975 | orchestrator |
2026-04-17 06:14:57.548986 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-17 06:14:57.548997 | orchestrator | Friday 17 April 2026 06:14:39 +0000 (0:00:00.160) 0:19:42.211 **********
2026-04-17 06:14:57.549007 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.549018 | orchestrator |
2026-04-17 06:14:57.549028 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-17 06:14:57.549039 | orchestrator | Friday 17 April 2026 06:14:39 +0000 (0:00:00.145) 0:19:42.356 **********
2026-04-17 06:14:57.549050 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:57.549061 | orchestrator |
2026-04-17 06:14:57.549072 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-17 06:14:57.549084 | orchestrator | Friday 17 April 2026 06:14:41 +0000 (0:00:01.498) 0:19:43.855 **********
2026-04-17 06:14:57.549095 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:57.549106 | orchestrator |
2026-04-17 06:14:57.549117 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-17 06:14:57.549127 | orchestrator | Friday 17 April 2026 06:14:41 +0000 (0:00:00.153) 0:19:44.009 **********
2026-04-17 06:14:57.549138 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-04-17 06:14:57.549148 | orchestrator |
2026-04-17 06:14:57.549159 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-17 06:14:57.549170 | orchestrator | Friday 17 April 2026 06:14:41 +0000 (0:00:00.242) 0:19:44.252 **********
2026-04-17 06:14:57.549180 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.549190 | orchestrator |
2026-04-17 06:14:57.549201 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-17 06:14:57.549212 | orchestrator | Friday 17 April 2026 06:14:41 +0000 (0:00:00.154) 0:19:44.406 **********
2026-04-17 06:14:57.549222 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.549233 | orchestrator |
2026-04-17 06:14:57.549244 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-17 06:14:57.549254 | orchestrator | Friday 17 April 2026 06:14:41 +0000 (0:00:00.153) 0:19:44.559 **********
2026-04-17 06:14:57.549265 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.549275 | orchestrator |
2026-04-17 06:14:57.549286 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-17 06:14:57.549297 | orchestrator | Friday 17 April 2026 06:14:42 +0000 (0:00:00.589) 0:19:45.149 **********
2026-04-17 06:14:57.549339 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.549351 | orchestrator |
2026-04-17 06:14:57.549364 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-17 06:14:57.549377 | orchestrator | Friday 17 April 2026 06:14:42 +0000 (0:00:00.164) 0:19:45.313 **********
2026-04-17 06:14:57.549414 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.549427 | orchestrator |
2026-04-17 06:14:57.549439 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-17 06:14:57.549452 | orchestrator | Friday 17 April 2026 06:14:42 +0000 (0:00:00.160) 0:19:45.474 **********
2026-04-17 06:14:57.549464 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.549476 | orchestrator |
2026-04-17 06:14:57.549488 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-17 06:14:57.549501 | orchestrator | Friday 17 April 2026 06:14:42 +0000 (0:00:00.155) 0:19:45.629 **********
2026-04-17 06:14:57.549514 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.549527 | orchestrator |
2026-04-17 06:14:57.549539 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-17 06:14:57.549551 | orchestrator | Friday 17 April 2026 06:14:43 +0000 (0:00:00.181) 0:19:45.811 **********
2026-04-17 06:14:57.549563 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.549575 | orchestrator |
2026-04-17 06:14:57.549587 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-17 06:14:57.549600 | orchestrator | Friday 17 April 2026 06:14:43 +0000 (0:00:00.164) 0:19:45.976 **********
2026-04-17 06:14:57.549613 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:57.549624 | orchestrator |
2026-04-17 06:14:57.549635 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-17 06:14:57.549645 | orchestrator | Friday 17 April 2026 06:14:43 +0000 (0:00:00.238) 0:19:46.215 **********
2026-04-17 06:14:57.549656 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-04-17 06:14:57.549667 | orchestrator |
2026-04-17 06:14:57.549678 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-17 06:14:57.549689 | orchestrator | Friday 17 April 2026 06:14:43 +0000 (0:00:00.218) 0:19:46.433 **********
2026-04-17 06:14:57.549699 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-17 06:14:57.549710 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-17 06:14:57.549737 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-17 06:14:57.549748 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-17 06:14:57.549759 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-17 06:14:57.549769 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-17 06:14:57.549780 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-17 06:14:57.549790 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-17 06:14:57.549801 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-17 06:14:57.549829 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-17 06:14:57.549840 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-17 06:14:57.549851 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-17 06:14:57.549862 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-17 06:14:57.549873 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-17 06:14:57.549883 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-17 06:14:57.549894 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-17 06:14:57.549905 | orchestrator |
2026-04-17 06:14:57.549916 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-17 06:14:57.549927 | orchestrator | Friday 17 April 2026 06:14:49 +0000 (0:00:05.485) 0:19:51.919 **********
2026-04-17 06:14:57.549938 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-04-17 06:14:57.549948 | orchestrator |
2026-04-17 06:14:57.549959 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-17 06:14:57.549970 | orchestrator | Friday 17 April 2026 06:14:49 +0000 (0:00:00.214) 0:19:52.134 **********
2026-04-17 06:14:57.549989 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 06:14:57.550001 | orchestrator |
2026-04-17 06:14:57.550012 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-17 06:14:57.550072 | orchestrator | Friday 17 April 2026 06:14:50 +0000 (0:00:00.978) 0:19:53.112 **********
2026-04-17 06:14:57.550083 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 06:14:57.550094 | orchestrator |
2026-04-17 06:14:57.550105 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-17 06:14:57.550115 | orchestrator | Friday 17 April 2026 06:14:51 +0000 (0:00:00.132) 0:19:54.100 **********
2026-04-17 06:14:57.550126 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.550136 | orchestrator |
2026-04-17 06:14:57.550147 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-17 06:14:57.550158 | orchestrator | Friday 17 April 2026 06:14:51 +0000 (0:00:00.151) 0:19:54.232 **********
2026-04-17 06:14:57.550169 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.550179 | orchestrator |
2026-04-17 06:14:57.550190 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-17 06:14:57.550201 | orchestrator | Friday 17 April 2026 06:14:51 +0000 (0:00:00.151) 0:19:54.384 **********
2026-04-17 06:14:57.550211 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.550222 | orchestrator |
2026-04-17 06:14:57.550232 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-17 06:14:57.550243 | orchestrator | Friday 17 April 2026 06:14:51 +0000 (0:00:00.164) 0:19:54.549 **********
2026-04-17 06:14:57.550254 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.550265 | orchestrator |
2026-04-17 06:14:57.550275 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-17 06:14:57.550286 | orchestrator | Friday 17 April 2026 06:14:51 +0000 (0:00:00.161) 0:19:54.711 **********
2026-04-17 06:14:57.550296 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.550327 | orchestrator |
2026-04-17 06:14:57.550338 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-17 06:14:57.550349 | orchestrator | Friday 17 April 2026 06:14:52 +0000 (0:00:00.171) 0:19:54.882 **********
2026-04-17 06:14:57.550359 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.550370 | orchestrator |
2026-04-17 06:14:57.550380 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-17 06:14:57.550391 | orchestrator | Friday 17 April 2026 06:14:52 +0000 (0:00:00.137) 0:19:55.019 **********
2026-04-17 06:14:57.550407 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.550418 | orchestrator |
2026-04-17 06:14:57.550429 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-17 06:14:57.550440 | orchestrator | Friday 17 April 2026 06:14:52 +0000 (0:00:00.150) 0:19:55.170 **********
2026-04-17 06:14:57.550451 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.550461 | orchestrator |
2026-04-17 06:14:57.550472 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-17 06:14:57.550483 | orchestrator | Friday 17 April 2026 06:14:52 +0000 (0:00:00.147) 0:19:55.318 **********
2026-04-17 06:14:57.550493 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.550504 | orchestrator |
2026-04-17 06:14:57.550515 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-17 06:14:57.550526 | orchestrator | Friday 17 April 2026 06:14:52 +0000 (0:00:00.147) 0:19:55.465 **********
2026-04-17 06:14:57.550537 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:14:57.550547 | orchestrator |
2026-04-17 06:14:57.550558 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-17 06:14:57.550569 | orchestrator | Friday 17 April 2026 06:14:52 +0000 (0:00:00.145) 0:19:55.610 **********
2026-04-17 06:14:57.550587 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:14:57.550598 | orchestrator |
2026-04-17 06:14:57.550615 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-17 06:14:57.550626 | orchestrator | Friday 17 April 2026 06:14:53 +0000 (0:00:00.246) 0:19:55.857 **********
2026-04-17 06:14:57.550637 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-04-17 06:14:57.550648 | orchestrator |
2026-04-17 06:14:57.550658 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-17 06:14:57.550669 | orchestrator | Friday 17 April 2026 06:14:57 +0000 (0:00:04.330) 0:20:00.188 **********
2026-04-17 06:14:57.550688 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 06:15:18.744576 | orchestrator |
2026-04-17 06:15:18.744658 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-17 06:15:18.744665 | orchestrator | Friday 17 April 2026 06:14:57 +0000 (0:00:00.189) 0:20:00.377 **********
2026-04-17 06:15:18.744671 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-17 06:15:18.744677 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-17 06:15:18.744683 | orchestrator |
2026-04-17 06:15:18.744687 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-17 06:15:18.744691 | orchestrator | Friday 17 April 2026 06:15:04 +0000 (0:00:06.798) 0:20:07.175 **********
2026-04-17 06:15:18.744695 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.744700 | orchestrator |
2026-04-17 06:15:18.744704 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-17 06:15:18.744708 | orchestrator | Friday 17 April 2026 06:15:04 +0000 (0:00:00.153) 0:20:07.329 **********
2026-04-17 06:15:18.744712 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.744715 | orchestrator |
2026-04-17 06:15:18.744720 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:15:18.744726 | orchestrator | Friday 17 April 2026 06:15:04 +0000 (0:00:00.143) 0:20:07.472 **********
2026-04-17 06:15:18.744730 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.744733 | orchestrator |
2026-04-17 06:15:18.744737 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:15:18.744741 | orchestrator | Friday 17 April 2026 06:15:04 +0000 (0:00:00.178) 0:20:07.651 **********
2026-04-17 06:15:18.744745 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.744748 | orchestrator |
2026-04-17 06:15:18.744752 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:15:18.744756 | orchestrator | Friday 17 April 2026 06:15:05 +0000 (0:00:00.169) 0:20:07.820 **********
2026-04-17 06:15:18.744760 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.744764 | orchestrator |
2026-04-17 06:15:18.744767 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:15:18.744771 | orchestrator | Friday 17 April 2026 06:15:05 +0000 (0:00:00.191) 0:20:08.011 **********
2026-04-17 06:15:18.744775 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:15:18.744779 | orchestrator |
2026-04-17 06:15:18.744783 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:15:18.744787 | orchestrator | Friday 17 April 2026 06:15:05 +0000 (0:00:00.311) 0:20:08.323 **********
2026-04-17 06:15:18.744790 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-17 06:15:18.744808 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-17 06:15:18.744812 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-17 06:15:18.744816 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.744819 | orchestrator |
2026-04-17 06:15:18.744823 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 06:15:18.744827 | orchestrator | Friday 17 April 2026 06:15:06 +0000 (0:00:00.446) 0:20:08.769 **********
2026-04-17 06:15:18.744831 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-17 06:15:18.744835 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-17 06:15:18.744839 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-17 06:15:18.744842 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.744846 | orchestrator |
2026-04-17 06:15:18.744850 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 06:15:18.744853 | orchestrator | Friday 17 April 2026 06:15:06 +0000 (0:00:00.479) 0:20:09.249 **********
2026-04-17 06:15:18.744857 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-17 06:15:18.744861 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-17 06:15:18.744865 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-17 06:15:18.744868 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.744872 | orchestrator |
2026-04-17 06:15:18.744876 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 06:15:18.744879 | orchestrator | Friday 17 April 2026 06:15:06 +0000 (0:00:00.444) 0:20:09.693 **********
2026-04-17 06:15:18.744883 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:15:18.744887 | orchestrator |
2026-04-17 06:15:18.744890 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 06:15:18.744903 | orchestrator | Friday 17 April 2026 06:15:07 +0000 (0:00:00.174) 0:20:09.868 **********
2026-04-17 06:15:18.744907 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-17 06:15:18.744911 | orchestrator |
2026-04-17 06:15:18.744915 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-17 06:15:18.744918 | orchestrator | Friday 17 April 2026 06:15:08 +0000 (0:00:01.529) 0:20:11.398 **********
2026-04-17 06:15:18.744923 | orchestrator | changed: [testbed-node-5]
2026-04-17 06:15:18.744926 | orchestrator |
2026-04-17 06:15:18.744930 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-17 06:15:18.744934 | orchestrator | Friday 17 April 2026 06:15:09 +0000 (0:00:00.847) 0:20:12.245 **********
2026-04-17 06:15:18.744938 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:15:18.744942 | orchestrator |
2026-04-17 06:15:18.744955 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-17 06:15:18.744960 | orchestrator | Friday 17 April 2026 06:15:09 +0000 (0:00:00.156) 0:20:12.402 **********
2026-04-17 06:15:18.744964 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:15:18.744968 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:15:18.744972 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:15:18.744976 | orchestrator |
2026-04-17 06:15:18.744980 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-17 06:15:18.744984 | orchestrator | Friday 17 April 2026 06:15:10 +0000 (0:00:00.772) 0:20:13.175 **********
2026-04-17 06:15:18.744988 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5
2026-04-17 06:15:18.744992 | orchestrator |
2026-04-17 06:15:18.744996 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-17 06:15:18.745000 | orchestrator | Friday 17 April 2026 06:15:10 +0000 (0:00:00.211) 0:20:13.387 **********
2026-04-17 06:15:18.745004 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.745007 | orchestrator |
2026-04-17 06:15:18.745015 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-17 06:15:18.745019 | orchestrator | Friday 17 April 2026 06:15:10 +0000 (0:00:00.138) 0:20:13.532 **********
2026-04-17 06:15:18.745022 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.745026 | orchestrator |
2026-04-17 06:15:18.745030 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-17 06:15:18.745034 | orchestrator | Friday 17 April 2026 06:15:10 +0000 (0:00:00.138) 0:20:13.670 **********
2026-04-17 06:15:18.745038 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:15:18.745042 | orchestrator |
2026-04-17 06:15:18.745046 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-17 06:15:18.745050 | orchestrator | Friday 17 April 2026 06:15:11 +0000 (0:00:00.446) 0:20:14.116 **********
2026-04-17 06:15:18.745053 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:15:18.745057 | orchestrator |
2026-04-17 06:15:18.745061 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-17 06:15:18.745065 | orchestrator | Friday 17 April 2026 06:15:11 +0000 (0:00:00.193) 0:20:14.310 **********
2026-04-17 06:15:18.745069 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-17 06:15:18.745073 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-17 06:15:18.745077 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-17 06:15:18.745081 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-17 06:15:18.745085 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-17 06:15:18.745088 | orchestrator |
2026-04-17 06:15:18.745092 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-17 06:15:18.745096 | orchestrator | Friday 17 April 2026 06:15:13 +0000 (0:00:01.837) 0:20:16.147 **********
2026-04-17 06:15:18.745100 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.745104 | orchestrator |
2026-04-17 06:15:18.745108 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-17 06:15:18.745112 | orchestrator | Friday 17 April 2026 06:15:13 +0000 (0:00:00.150) 0:20:16.298 **********
2026-04-17 06:15:18.745116 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5
2026-04-17 06:15:18.745119 | orchestrator |
2026-04-17 06:15:18.745123 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-17 06:15:18.745127 | orchestrator | Friday 17 April 2026 06:15:14 +0000 (0:00:00.637) 0:20:16.935 **********
2026-04-17 06:15:18.745131 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-17 06:15:18.745135 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-17 06:15:18.745139 | orchestrator |
2026-04-17 06:15:18.745143 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-17 06:15:18.745147 | orchestrator | Friday 17 April 2026 06:15:15 +0000 (0:00:00.820) 0:20:17.755 **********
2026-04-17 06:15:18.745152 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:15:18.745156 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-17 06:15:18.745161 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-17 06:15:18.745165 | orchestrator |
2026-04-17 06:15:18.745169 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-17 06:15:18.745173 | orchestrator | Friday 17 April 2026 06:15:17 +0000 (0:00:02.158) 0:20:19.914 **********
2026-04-17 06:15:18.745178 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-17 06:15:18.745182 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-17 06:15:18.745187 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:15:18.745191 | orchestrator |
2026-04-17 06:15:18.745195 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-17 06:15:18.745202 | orchestrator | Friday 17 April 2026 06:15:18 +0000 (0:00:01.026) 0:20:20.940 **********
2026-04-17 06:15:18.745211 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.745215 | orchestrator |
2026-04-17 06:15:18.745219 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-17 06:15:18.745224 | orchestrator | Friday 17 April 2026 06:15:18 +0000 (0:00:00.259) 0:20:21.200 **********
2026-04-17 06:15:18.745228 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.745233 | orchestrator |
2026-04-17 06:15:18.745237 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-17 06:15:18.745241 | orchestrator | Friday 17 April 2026 06:15:18 +0000 (0:00:00.153) 0:20:21.353 **********
2026-04-17 06:15:18.745246 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:15:18.745250 | orchestrator |
2026-04-17 06:15:18.745256 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-17 06:17:10.924717 | orchestrator | Friday 17 April 2026 06:15:18 +0000 (0:00:00.125) 0:20:21.479 **********
2026-04-17 06:17:10.924833 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5
2026-04-17 06:17:10.924849 | orchestrator |
2026-04-17 06:17:10.924879 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-17 06:17:10.924902 | orchestrator | Friday 17 April 2026 06:15:18 +0000 (0:00:00.236) 0:20:21.716 **********
2026-04-17 06:17:10.924913 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:17:10.924926 | orchestrator |
2026-04-17 06:17:10.924937 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-17 06:17:10.924948 | orchestrator | Friday 17 April 2026 06:15:19 +0000 (0:00:00.458) 0:20:22.174 **********
2026-04-17 06:17:10.924959 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:17:10.924970 | orchestrator |
2026-04-17 06:17:10.924981 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-17 06:17:10.924992 | orchestrator | Friday 17 April 2026 06:15:21 +0000 (0:00:02.373) 0:20:24.548 **********
2026-04-17 06:17:10.925003 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5
2026-04-17 06:17:10.925013 | orchestrator |
2026-04-17 06:17:10.925024 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-17 06:17:10.925036 | orchestrator | Friday 17 April 2026 06:15:22 +0000 (0:00:00.639) 0:20:25.187 **********
2026-04-17 06:17:10.925047 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:17:10.925058 | orchestrator |
2026-04-17 06:17:10.925068 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-17 06:17:10.925079 | orchestrator | Friday 17 April 2026 06:15:23 +0000 (0:00:00.975) 0:20:26.163 **********
2026-04-17 06:17:10.925090 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:17:10.925100 | orchestrator |
2026-04-17 06:17:10.925111 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-17 06:17:10.925121 | orchestrator | Friday 17 April 2026 06:15:24 +0000 (0:00:00.961) 0:20:27.124 **********
2026-04-17 06:17:10.925132 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:17:10.925143 | orchestrator |
2026-04-17 06:17:10.925153 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-17 06:17:10.925164 | orchestrator | Friday 17 April 2026 06:15:26 +0000 (0:00:02.213) 0:20:29.338 **********
2026-04-17 06:17:10.925175 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:17:10.925187 | orchestrator |
2026-04-17 06:17:10.925198 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-17 06:17:10.925208 | orchestrator | Friday 17 April 2026 06:15:26 +0000 (0:00:00.168) 0:20:29.506 **********
2026-04-17 06:17:10.925219 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:17:10.925230 | orchestrator |
2026-04-17 06:17:10.925240 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-17 06:17:10.925251 | orchestrator | Friday 17 April 2026 06:15:26 +0000 (0:00:00.167) 0:20:29.674 **********
2026-04-17 06:17:10.925264 | orchestrator | ok: [testbed-node-5] => (item=3)
2026-04-17 06:17:10.925276 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-04-17 06:17:10.925288 | orchestrator |
2026-04-17 06:17:10.925300 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-17 06:17:10.925337 | orchestrator | Friday 17 April 2026 06:15:27 +0000 (0:00:00.874) 0:20:30.549 **********
2026-04-17 06:17:10.925349 | orchestrator | ok: [testbed-node-5] => (item=3)
2026-04-17 06:17:10.925362 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-04-17 06:17:10.925374 | orchestrator |
2026-04-17 06:17:10.925385 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-17 06:17:10.925398 | orchestrator | Friday 17 April 2026 06:15:29 +0000 (0:00:01.902) 0:20:32.452 **********
2026-04-17 06:17:10.925411 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-04-17 06:17:10.925423 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-04-17 06:17:10.925460 | orchestrator |
2026-04-17 06:17:10.925473 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-17 06:17:10.925485 | orchestrator | Friday 17 April 2026 06:15:33 +0000 (0:00:03.593) 0:20:36.045 **********
2026-04-17 06:17:10.925497 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:17:10.925509 | orchestrator |
2026-04-17 06:17:10.925521 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-17 06:17:10.925533 | orchestrator | Friday 17 April 2026 06:15:33 +0000 (0:00:00.251) 0:20:36.297 **********
2026-04-17 06:17:10.925546 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-17 06:17:10.925558 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:17:10.925571 | orchestrator |
2026-04-17 06:17:10.925583 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-17 06:17:10.925595 | orchestrator | Friday 17 April 2026 06:15:45 +0000 (0:00:12.258) 0:20:48.556 **********
2026-04-17 06:17:10.925608 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:17:10.925618 | orchestrator |
2026-04-17 06:17:10.925629 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-04-17 06:17:10.925640 | orchestrator | Friday 17 April 2026 06:15:46 +0000 (0:00:00.319) 0:20:48.875 **********
2026-04-17 06:17:10.925650 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:17:10.925661 | orchestrator |
2026-04-17 06:17:10.925686 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-04-17 06:17:10.925698 | orchestrator | Friday 17 April 2026 06:15:46 +0000 (0:00:00.586) 0:20:49.462 **********
2026-04-17 06:17:10.925709 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:17:10.925720 | orchestrator |
2026-04-17 06:17:10.925730 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-04-17 06:17:10.925741 | orchestrator | Friday 17 April 2026 06:15:46 +0000 (0:00:00.139) 0:20:49.601 **********
2026-04-17 06:17:10.925752 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-04-17 06:17:10.925763 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:17:10.925773 | orchestrator |
2026-04-17 06:17:10.925802 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-17 06:17:10.925813 | orchestrator | Friday 17 April 2026 06:15:51 +0000 (0:00:04.373) 0:20:53.974 **********
2026-04-17 06:17:10.925824 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:17:10.925834 | orchestrator |
2026-04-17 06:17:10.925845 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-17 06:17:10.925856 | orchestrator | Friday 17 April 2026 06:15:51 +0000 (0:00:00.160) 0:20:54.135 **********
2026-04-17 06:17:10.925866 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:17:10.925877 | orchestrator |
2026-04-17 06:17:10.925887 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-17 06:17:10.925898 | orchestrator | Friday 17 April 2026 06:15:51 +0000 (0:00:00.143) 0:20:54.279 **********
2026-04-17 06:17:10.925909 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:17:10.925919 | orchestrator |
2026-04-17 06:17:10.925929 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-17 06:17:10.925948 | orchestrator | Friday 17 April 2026 06:15:51 +0000 (0:00:00.165) 0:20:54.444 **********
2026-04-17 06:17:10.925959 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:17:10.925970 | orchestrator |
2026-04-17 06:17:10.925981 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler]
********************************** 2026-04-17 06:17:10.925991 | orchestrator | Friday 17 April 2026 06:15:51 +0000 (0:00:00.147) 0:20:54.592 ********** 2026-04-17 06:17:10.926002 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:17:10.926012 | orchestrator | 2026-04-17 06:17:10.926077 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-17 06:17:10.926088 | orchestrator | Friday 17 April 2026 06:15:51 +0000 (0:00:00.133) 0:20:54.725 ********** 2026-04-17 06:17:10.926099 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:17:10.926110 | orchestrator | 2026-04-17 06:17:10.926120 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-17 06:17:10.926131 | orchestrator | Friday 17 April 2026 06:15:52 +0000 (0:00:00.161) 0:20:54.886 ********** 2026-04-17 06:17:10.926142 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:17:10.926153 | orchestrator | 2026-04-17 06:17:10.926163 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-04-17 06:17:10.926174 | orchestrator | 2026-04-17 06:17:10.926185 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 06:17:10.926195 | orchestrator | Friday 17 April 2026 06:15:53 +0000 (0:00:01.152) 0:20:56.038 ********** 2026-04-17 06:17:10.926206 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:17:10.926217 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:17:10.926227 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:17:10.926238 | orchestrator | 2026-04-17 06:17:10.926249 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:17:10.926259 | orchestrator | Friday 17 April 2026 06:15:54 +0000 (0:00:00.727) 0:20:56.765 ********** 2026-04-17 06:17:10.926270 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:17:10.926281 | orchestrator | ok: 
[testbed-node-4]
2026-04-17 06:17:10.926291 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:17:10.926312 | orchestrator |
2026-04-17 06:17:10.926330 | orchestrator | TASK [Re-enable pg autoscale on pools] *****************************************
2026-04-17 06:17:10.926349 | orchestrator | Friday 17 April 2026 06:15:54 +0000 (0:00:00.578) 0:20:57.344 **********
2026-04-17 06:17:10.926366 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-04-17 06:17:10.926383 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-04-17 06:17:10.926401 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-04-17 06:17:10.926419 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-04-17 06:17:10.926463 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-04-17 06:17:10.926481 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-04-17 06:17:10.926499 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-04-17 06:17:10.926516 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-04-17 06:17:10.926534 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-04-17 06:17:10.926553 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-04-17 06:17:10.926565 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-04-17 06:17:10.926576 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-04-17 06:17:10.926586 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-04-17 06:17:10.926614 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-04-17 06:17:10.926626 | orchestrator |
2026-04-17 06:17:10.926636 | orchestrator | TASK [Unset osd flags] *********************************************************
2026-04-17 06:17:10.926647 | orchestrator | Friday 17 April 2026 06:17:06 +0000 (0:01:11.613) 0:22:08.957 **********
2026-04-17 06:17:10.926658 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-04-17 06:17:10.926668 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-04-17 06:17:10.926679 | orchestrator |
2026-04-17 06:17:10.926689 | orchestrator | TASK [Re-enable balancer] ******************************************************
2026-04-17 06:17:10.926709 | orchestrator | Friday 17 April 2026 06:17:10 +0000 (0:00:04.700) 0:22:13.658 **********
2026-04-17 06:17:22.154710 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:17:22.154828 | orchestrator |
2026-04-17 06:17:22.154846 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] **********************
2026-04-17 06:17:22.154859 | orchestrator |
2026-04-17 06:17:22.154870 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-17 06:17:22.154882 | orchestrator | Friday 17 April 2026 06:17:13 +0000 (0:00:02.368) 0:22:16.027 **********
2026-04-17 06:17:22.154892 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-17 06:17:22.154903 | orchestrator |
2026-04-17 06:17:22.154914 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-17 06:17:22.154925 | orchestrator | Friday 17 April 2026 06:17:13 +0000 (0:00:00.664) 0:22:16.692 **********
2026-04-17 06:17:22.154936 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:22.154949 | orchestrator |
2026-04-17 06:17:22.154959 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-17 06:17:22.154970 | orchestrator | Friday 17 April 2026 06:17:14 +0000 (0:00:00.479) 0:22:17.171 **********
2026-04-17 06:17:22.154981 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:22.154992 | orchestrator |
2026-04-17 06:17:22.155002 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-17 06:17:22.155013 | orchestrator | Friday 17 April 2026 06:17:14 +0000 (0:00:00.152) 0:22:17.323 **********
2026-04-17 06:17:22.155024 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:22.155034 | orchestrator |
2026-04-17 06:17:22.155045 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-17 06:17:22.155056 | orchestrator | Friday 17 April 2026 06:17:15 +0000 (0:00:00.487) 0:22:17.811 **********
2026-04-17 06:17:22.155067 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:22.155077 | orchestrator |
2026-04-17 06:17:22.155088 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-17 06:17:22.155099 | orchestrator | Friday 17 April 2026 06:17:15 +0000 (0:00:00.175) 0:22:17.986 **********
2026-04-17 06:17:22.155109 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:22.155120 | orchestrator |
2026-04-17 06:17:22.155131 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-17 06:17:22.155141 | orchestrator | Friday 17 April 2026 06:17:15 +0000 (0:00:00.162) 0:22:18.149 **********
2026-04-17 06:17:22.155152 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:22.155163 | orchestrator |
2026-04-17 06:17:22.155174 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-17 06:17:22.155185 | orchestrator | Friday 17 April 2026 06:17:15 +0000 (0:00:00.178) 0:22:18.327 **********
2026-04-17 06:17:22.155196 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:22.155207 | orchestrator |
2026-04-17 06:17:22.155218 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-17 06:17:22.155229 | orchestrator | Friday 17 April 2026 06:17:15 +0000 (0:00:00.168) 0:22:18.495 **********
2026-04-17 06:17:22.155239 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:22.155250 | orchestrator |
2026-04-17 06:17:22.155261 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-17 06:17:22.155294 | orchestrator | Friday 17 April 2026 06:17:15 +0000 (0:00:00.159) 0:22:18.655 **********
2026-04-17 06:17:22.155306 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 06:17:22.155317 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:17:22.155328 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:17:22.155339 | orchestrator |
2026-04-17 06:17:22.155349 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-17 06:17:22.155360 | orchestrator | Friday 17 April 2026 06:17:16 +0000 (0:00:01.085) 0:22:19.741 **********
2026-04-17 06:17:22.155371 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:22.155381 | orchestrator |
2026-04-17 06:17:22.155392 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-17 06:17:22.155402 | orchestrator | Friday 17 April 2026 06:17:17 +0000 (0:00:00.289) 0:22:20.030 **********
2026-04-17 06:17:22.155413 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 06:17:22.155424 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:17:22.155435 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:17:22.155470 | orchestrator |
2026-04-17 06:17:22.155482 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-17 06:17:22.155493 | orchestrator | Friday 17 April 2026 06:17:19 +0000 (0:00:02.324) 0:22:22.355 **********
2026-04-17 06:17:22.155504 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 06:17:22.155516 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 06:17:22.155526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 06:17:22.155537 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:22.155548 | orchestrator |
2026-04-17 06:17:22.155559 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-17 06:17:22.155570 | orchestrator | Friday 17 April 2026 06:17:20 +0000 (0:00:01.312) 0:22:23.668 **********
2026-04-17 06:17:22.155595 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:17:22.155609 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:17:22.155638 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item':
'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:17:22.155650 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:22.155661 | orchestrator |
2026-04-17 06:17:22.155673 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-17 06:17:22.155683 | orchestrator | Friday 17 April 2026 06:17:21 +0000 (0:00:00.717) 0:22:24.385 **********
2026-04-17 06:17:22.155696 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:17:22.155710 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:17:22.155734 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:17:22.155745 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:22.155756 | orchestrator |
2026-04-17 06:17:22.155767 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-17 06:17:22.155778 | orchestrator | Friday 17 April 2026 06:17:21 +0000 (0:00:00.218) 0:22:24.603 **********
2026-04-17 06:17:22.155791 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:17:18.274402', 'end': '2026-04-17 06:17:18.318803', 'delta': '0:00:00.044401', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:17:22.155807 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:17:18.854484', 'end': '2026-04-17 06:17:18.906741', 'delta': '0:00:00.052257', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:17:22.155838 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:17:19.420716', 'end': '2026-04-17 06:17:19.468056', 'delta': '0:00:00.047340', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:17:22.155867 | orchestrator |
2026-04-17 06:17:22.155896 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-17 06:17:26.080425 | orchestrator | Friday 17 April 2026 06:17:22 +0000 (0:00:00.283) 0:22:24.887 **********
2026-04-17 06:17:26.080585 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:26.080610 | orchestrator |
2026-04-17 06:17:26.080627 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-17 06:17:26.080645 | orchestrator | Friday 17 April 2026 06:17:22 +0000 (0:00:00.270) 0:22:25.158 **********
2026-04-17 06:17:26.080661 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:26.080680 | orchestrator |
2026-04-17 06:17:26.080698 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-17 06:17:26.080733 | orchestrator | Friday 17 April 2026 06:17:22 +0000 (0:00:00.253) 0:22:25.411 **********
2026-04-17 06:17:26.080743 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:26.080753 | orchestrator |
2026-04-17 06:17:26.080764 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-17 06:17:26.080773 | orchestrator | Friday 17 April 2026 06:17:22 +0000 (0:00:00.159) 0:22:25.571 **********
2026-04-17 06:17:26.080782 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:26.080792 | orchestrator |
2026-04-17 06:17:26.080801 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:17:26.080811 | orchestrator | Friday 17 April 2026 06:17:23 +0000 (0:00:01.026) 0:22:26.598 **********
2026-04-17 06:17:26.080820 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:17:26.080830 | orchestrator |
2026-04-17 06:17:26.080839 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-17 06:17:26.080848 | orchestrator | Friday 17 April 2026 06:17:24 +0000 (0:00:00.163) 0:22:26.761 **********
2026-04-17 06:17:26.080858 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:26.080867 | orchestrator |
2026-04-17 06:17:26.080877 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-17 06:17:26.080886 | orchestrator | Friday 17 April 2026 06:17:24 +0000 (0:00:00.147) 0:22:26.909 **********
2026-04-17 06:17:26.080895 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:26.080904 | orchestrator |
2026-04-17 06:17:26.080914 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:17:26.080923 | orchestrator | Friday 17 April 2026 06:17:24 +0000 (0:00:00.243) 0:22:27.152 **********
2026-04-17 06:17:26.080932 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:26.080942 | orchestrator |
2026-04-17 06:17:26.080951 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-17 06:17:26.080960 | orchestrator | Friday 17 April 2026 06:17:24 +0000 (0:00:00.135) 0:22:27.287 **********
2026-04-17 06:17:26.080972 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:26.080982 | orchestrator |
2026-04-17 06:17:26.080993 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-17 06:17:26.081004 | orchestrator | Friday 17 April 2026 06:17:24 +0000 (0:00:00.163) 0:22:27.451 **********
2026-04-17 06:17:26.081016 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:26.081027 | orchestrator |
2026-04-17 06:17:26.081038 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-17 06:17:26.081049 | orchestrator | Friday 17 April 2026 06:17:25 +0000 (0:00:00.560) 0:22:28.011 **********
2026-04-17 06:17:26.081060 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:26.081072 | orchestrator |
2026-04-17 06:17:26.081083 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-17 06:17:26.081094 | orchestrator | Friday 17 April 2026 06:17:25 +0000 (0:00:00.147) 0:22:28.158 **********
2026-04-17 06:17:26.081105 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:26.081116 | orchestrator |
2026-04-17 06:17:26.081128 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-17 06:17:26.081138 | orchestrator | Friday 17 April 2026 06:17:25 +0000 (0:00:00.161) 0:22:28.320 **********
2026-04-17 06:17:26.081150 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:26.081161 | orchestrator |
2026-04-17 06:17:26.081172 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-17 06:17:26.081184 | orchestrator | Friday 17 April 2026 06:17:25 +0000 (0:00:00.154) 0:22:28.475 **********
2026-04-17 06:17:26.081195 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:26.081206 | orchestrator |
2026-04-17 06:17:26.081217 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-17 06:17:26.081228 | orchestrator | Friday 17 April 2026 06:17:25 +0000 (0:00:00.151) 0:22:28.626 **********
2026-04-17 06:17:26.081242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None,
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:17:26.081279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:17:26.081309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:17:26.081323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-17 06:17:26.081337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:17:26.081348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:17:26.081357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:17:26.081378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1d6df01d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-17 06:17:26.081405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:17:26.376947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:17:26.377053 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:17:26.377070 | orchestrator |
2026-04-17 06:17:26.377100 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-17 06:17:26.377124 | orchestrator | Friday 17 April 2026 06:17:26 +0000 (0:00:00.308) 0:22:28.935 **********
2026-04-17 06:17:26.377138 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:17:26.377154 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:17:26.377165 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:17:26.377204 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:17:26.377217 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:17:26.377248 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:17:26.377260 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '',
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:17:26.377321 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1d6df01d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d6df01d-73bc-4a8f-b4ef-36e98f006fb7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:17:26.377346 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:17:26.377366 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:17:55.755305 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:17:55.755456 | orchestrator | 2026-04-17 06:17:55.755484 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-17 06:17:55.755565 | 
orchestrator | Friday 17 April 2026 06:17:26 +0000 (0:00:00.329) 0:22:29.265 ********** 2026-04-17 06:17:55.755585 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:17:55.755604 | orchestrator | 2026-04-17 06:17:55.755623 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-17 06:17:55.755641 | orchestrator | Friday 17 April 2026 06:17:27 +0000 (0:00:00.530) 0:22:29.796 ********** 2026-04-17 06:17:55.755660 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:17:55.755678 | orchestrator | 2026-04-17 06:17:55.755696 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 06:17:55.755714 | orchestrator | Friday 17 April 2026 06:17:27 +0000 (0:00:00.152) 0:22:29.948 ********** 2026-04-17 06:17:55.755732 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:17:55.755751 | orchestrator | 2026-04-17 06:17:55.755770 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:17:55.755789 | orchestrator | Friday 17 April 2026 06:17:27 +0000 (0:00:00.484) 0:22:30.433 ********** 2026-04-17 06:17:55.755808 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:17:55.755826 | orchestrator | 2026-04-17 06:17:55.755845 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 06:17:55.755864 | orchestrator | Friday 17 April 2026 06:17:27 +0000 (0:00:00.140) 0:22:30.573 ********** 2026-04-17 06:17:55.755917 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:17:55.755936 | orchestrator | 2026-04-17 06:17:55.755954 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:17:55.755973 | orchestrator | Friday 17 April 2026 06:17:28 +0000 (0:00:00.300) 0:22:30.874 ********** 2026-04-17 06:17:55.755991 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:17:55.756010 | orchestrator | 2026-04-17 06:17:55.756028 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-17 06:17:55.756047 | orchestrator | Friday 17 April 2026 06:17:28 +0000 (0:00:00.171) 0:22:31.045 ********** 2026-04-17 06:17:55.756065 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 06:17:55.756084 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-17 06:17:55.756103 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-17 06:17:55.756121 | orchestrator | 2026-04-17 06:17:55.756141 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-17 06:17:55.756159 | orchestrator | Friday 17 April 2026 06:17:29 +0000 (0:00:01.690) 0:22:32.736 ********** 2026-04-17 06:17:55.756178 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-17 06:17:55.756197 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-17 06:17:55.756216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-17 06:17:55.756234 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:17:55.756252 | orchestrator | 2026-04-17 06:17:55.756270 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-17 06:17:55.756289 | orchestrator | Friday 17 April 2026 06:17:30 +0000 (0:00:00.221) 0:22:32.957 ********** 2026-04-17 06:17:55.756308 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:17:55.756327 | orchestrator | 2026-04-17 06:17:55.756345 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-17 06:17:55.756364 | orchestrator | Friday 17 April 2026 06:17:30 +0000 (0:00:00.165) 0:22:33.123 ********** 2026-04-17 06:17:55.756383 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 06:17:55.756402 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 
06:17:55.756421 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:17:55.756458 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:17:55.756477 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 06:17:55.756518 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 06:17:55.756540 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:17:55.756557 | orchestrator | 2026-04-17 06:17:55.756580 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-17 06:17:55.756599 | orchestrator | Friday 17 April 2026 06:17:31 +0000 (0:00:00.943) 0:22:34.067 ********** 2026-04-17 06:17:55.756617 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 06:17:55.756637 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:17:55.756655 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:17:55.756673 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:17:55.756690 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 06:17:55.756707 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 06:17:55.756725 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:17:55.756743 | orchestrator | 2026-04-17 06:17:55.756760 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-04-17 06:17:55.756793 | orchestrator | Friday 17 April 2026 06:17:33 +0000 (0:00:01.889) 0:22:35.956 
********** 2026-04-17 06:17:55.756811 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:17:55.756829 | orchestrator | 2026-04-17 06:17:55.756846 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-04-17 06:17:55.756862 | orchestrator | Friday 17 April 2026 06:17:35 +0000 (0:00:02.141) 0:22:38.098 ********** 2026-04-17 06:17:55.756877 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:17:55.756893 | orchestrator | 2026-04-17 06:17:55.756942 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-04-17 06:17:55.756962 | orchestrator | Friday 17 April 2026 06:17:37 +0000 (0:00:01.892) 0:22:39.991 ********** 2026-04-17 06:17:55.756981 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:17:55.756999 | orchestrator | 2026-04-17 06:17:55.757017 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-04-17 06:17:55.757035 | orchestrator | Friday 17 April 2026 06:17:38 +0000 (0:00:01.108) 0:22:41.099 ********** 2026-04-17 06:17:55.757059 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4770', 'value': {'gid': 4770, 'name': 'testbed-node-4', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.14:6817/1995222374', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.14:6816', 'nonce': 1995222374}, {'type': 'v1', 'addr': '192.168.16.14:6817', 'nonce': 1995222374}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 
2026-04-17 06:17:55.757082 | orchestrator | 2026-04-17 06:17:55.757100 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-04-17 06:17:55.757119 | orchestrator | Friday 17 April 2026 06:17:38 +0000 (0:00:00.227) 0:22:41.326 ********** 2026-04-17 06:17:55.757137 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-17 06:17:55.757155 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-4) 2026-04-17 06:17:55.757173 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-17 06:17:55.757191 | orchestrator | 2026-04-17 06:17:55.757209 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-04-17 06:17:55.757227 | orchestrator | Friday 17 April 2026 06:17:39 +0000 (0:00:00.976) 0:22:42.302 ********** 2026-04-17 06:17:55.757245 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5) 2026-04-17 06:17:55.757263 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3) 2026-04-17 06:17:55.757281 | orchestrator | 2026-04-17 06:17:55.757298 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-04-17 06:17:55.757316 | orchestrator | Friday 17 April 2026 06:17:40 +0000 (0:00:01.027) 0:22:43.330 ********** 2026-04-17 06:17:55.757334 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 06:17:55.757352 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:17:55.757368 | orchestrator | 2026-04-17 06:17:55.757383 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-04-17 06:17:55.757401 | orchestrator | Friday 17 April 2026 06:17:48 +0000 (0:00:07.882) 0:22:51.212 ********** 2026-04-17 06:17:55.757419 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => 
(item=testbed-node-5) 2026-04-17 06:17:55.757438 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:17:55.757456 | orchestrator | 2026-04-17 06:17:55.757486 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-04-17 06:17:55.757548 | orchestrator | Friday 17 April 2026 06:17:52 +0000 (0:00:03.780) 0:22:54.992 ********** 2026-04-17 06:17:55.757583 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:17:55.757601 | orchestrator | 2026-04-17 06:17:55.757618 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-04-17 06:17:55.757636 | orchestrator | Friday 17 April 2026 06:17:53 +0000 (0:00:01.175) 0:22:56.168 ********** 2026-04-17 06:17:55.757654 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:17:55.757671 | orchestrator | 2026-04-17 06:17:55.757688 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-04-17 06:17:55.757707 | orchestrator | 2026-04-17 06:17:55.757726 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:17:55.757744 | orchestrator | Friday 17 April 2026 06:17:54 +0000 (0:00:00.842) 0:22:57.011 ********** 2026-04-17 06:17:55.757763 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-04-17 06:17:55.757781 | orchestrator | 2026-04-17 06:17:55.757800 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 06:17:55.757818 | orchestrator | Friday 17 April 2026 06:17:54 +0000 (0:00:00.279) 0:22:57.290 ********** 2026-04-17 06:17:55.757835 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:17:55.757854 | orchestrator | 2026-04-17 06:17:55.757872 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-17 06:17:55.757890 | orchestrator | Friday 
17 April 2026 06:17:55 +0000 (0:00:00.459) 0:22:57.750 ********** 2026-04-17 06:17:55.757909 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:17:55.757928 | orchestrator | 2026-04-17 06:17:55.757946 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 06:17:55.757998 | orchestrator | Friday 17 April 2026 06:17:55 +0000 (0:00:00.140) 0:22:57.890 ********** 2026-04-17 06:17:55.758096 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:17:55.758112 | orchestrator | 2026-04-17 06:17:55.758121 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:17:55.758131 | orchestrator | Friday 17 April 2026 06:17:55 +0000 (0:00:00.447) 0:22:58.338 ********** 2026-04-17 06:17:55.758140 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:17:55.758150 | orchestrator | 2026-04-17 06:17:55.758174 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-17 06:18:03.414852 | orchestrator | Friday 17 April 2026 06:17:55 +0000 (0:00:00.148) 0:22:58.486 ********** 2026-04-17 06:18:03.414978 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:03.415006 | orchestrator | 2026-04-17 06:18:03.415026 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-17 06:18:03.415038 | orchestrator | Friday 17 April 2026 06:17:55 +0000 (0:00:00.147) 0:22:58.634 ********** 2026-04-17 06:18:03.415049 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:03.415060 | orchestrator | 2026-04-17 06:18:03.415072 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-17 06:18:03.415084 | orchestrator | Friday 17 April 2026 06:17:56 +0000 (0:00:00.156) 0:22:58.790 ********** 2026-04-17 06:18:03.415094 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:03.415106 | orchestrator | 2026-04-17 06:18:03.415117 | orchestrator | TASK 
[ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-17 06:18:03.415127 | orchestrator | Friday 17 April 2026 06:17:56 +0000 (0:00:00.181) 0:22:58.971 ********** 2026-04-17 06:18:03.415138 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:03.415149 | orchestrator | 2026-04-17 06:18:03.415160 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-17 06:18:03.415170 | orchestrator | Friday 17 April 2026 06:17:56 +0000 (0:00:00.562) 0:22:59.534 ********** 2026-04-17 06:18:03.415181 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:18:03.415192 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:18:03.415203 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:18:03.415214 | orchestrator | 2026-04-17 06:18:03.415224 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-17 06:18:03.415258 | orchestrator | Friday 17 April 2026 06:17:57 +0000 (0:00:00.789) 0:23:00.323 ********** 2026-04-17 06:18:03.415269 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:03.415279 | orchestrator | 2026-04-17 06:18:03.415290 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-17 06:18:03.415300 | orchestrator | Friday 17 April 2026 06:17:57 +0000 (0:00:00.299) 0:23:00.622 ********** 2026-04-17 06:18:03.415311 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:18:03.415321 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:18:03.415332 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:18:03.415342 | orchestrator | 2026-04-17 06:18:03.415352 | 
orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-17 06:18:03.415363 | orchestrator | Friday 17 April 2026 06:17:59 +0000 (0:00:01.956) 0:23:02.579 ********** 2026-04-17 06:18:03.415373 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-17 06:18:03.415392 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-17 06:18:03.415411 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-17 06:18:03.415429 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:03.415448 | orchestrator | 2026-04-17 06:18:03.415468 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-17 06:18:03.415488 | orchestrator | Friday 17 April 2026 06:18:00 +0000 (0:00:00.454) 0:23:03.033 ********** 2026-04-17 06:18:03.415567 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-17 06:18:03.415586 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-17 06:18:03.415598 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 06:18:03.415611 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:03.415624 | orchestrator | 2026-04-17 06:18:03.415637 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-17 06:18:03.415650 | orchestrator | Friday 17 April 2026 
06:18:01 +0000 (0:00:00.719) 0:23:03.753 ********** 2026-04-17 06:18:03.415665 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:03.415700 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:03.415715 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:03.415738 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:03.415751 | orchestrator | 2026-04-17 06:18:03.415762 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-17 06:18:03.415773 | orchestrator | Friday 17 April 2026 06:18:01 +0000 (0:00:00.190) 0:23:03.944 ********** 2026-04-17 06:18:03.415786 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': 
'2026-04-17 06:17:58.447715', 'end': '2026-04-17 06:17:58.495028', 'delta': '0:00:00.047313', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-17 06:18:03.415801 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:17:59.053104', 'end': '2026-04-17 06:17:59.085471', 'delta': '0:00:00.032367', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-17 06:18:03.415818 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:17:59.625286', 'end': '2026-04-17 06:17:59.680997', 'delta': '0:00:00.055711', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:18:03.415829 | orchestrator |
2026-04-17 06:18:03.415840 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-17 06:18:03.415851 | orchestrator | Friday 17 April 2026 06:18:01 +0000 (0:00:00.226) 0:23:04.170 **********
2026-04-17 06:18:03.415862 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:18:03.415873 | orchestrator |
2026-04-17 06:18:03.415884 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-17 06:18:03.415894 | orchestrator | Friday 17 April 2026 06:18:01 +0000 (0:00:00.283) 0:23:04.454 **********
2026-04-17 06:18:03.415905 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:03.415916 | orchestrator |
2026-04-17 06:18:03.415926 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-17 06:18:03.415953 | orchestrator | Friday 17 April 2026 06:18:01 +0000 (0:00:00.271) 0:23:04.726 **********
2026-04-17 06:18:03.415975 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:18:03.415986 | orchestrator |
2026-04-17 06:18:03.415997 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-17 06:18:03.416008 | orchestrator | Friday 17 April 2026 06:18:02 +0000 (0:00:01.054) 0:23:04.898 **********
2026-04-17 06:18:03.416018 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:18:03.416029 | orchestrator |
2026-04-17 06:18:03.416046 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:18:03.416057 | orchestrator | Friday 17 April 2026 06:18:03 +0000 (0:00:01.054) 0:23:05.953 **********
2026-04-17 06:18:03.416067 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:18:03.416078 | orchestrator |
2026-04-17 06:18:03.416088 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-17 06:18:03.416099 | orchestrator | Friday 17 April 2026 06:18:03 +0000 (0:00:00.155) 0:23:06.108 **********
2026-04-17 06:18:03.416117 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:05.549882 | orchestrator |
2026-04-17 06:18:05.549986 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-17 06:18:05.550003 | orchestrator | Friday 17 April 2026 06:18:03 +0000 (0:00:00.575) 0:23:06.683 **********
2026-04-17 06:18:05.550078 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:05.550093 | orchestrator |
2026-04-17 06:18:05.550104 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:18:05.550115 | orchestrator | Friday 17 April 2026 06:18:04 +0000 (0:00:00.241) 0:23:06.925 **********
2026-04-17 06:18:05.550126 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:05.550136 | orchestrator |
2026-04-17 06:18:05.550153 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-17 06:18:05.550172 | orchestrator | Friday 17 April 2026 06:18:04 +0000 (0:00:00.137) 0:23:07.062 **********
2026-04-17 06:18:05.550189 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:05.550208 | orchestrator |
2026-04-17 06:18:05.550226 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-17 06:18:05.550246 | orchestrator | Friday 17 April 2026 06:18:04 +0000 (0:00:00.138) 0:23:07.200 **********
2026-04-17 06:18:05.550265 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:18:05.550285 | orchestrator |
2026-04-17 06:18:05.550304 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-17 06:18:05.550321 | orchestrator | Friday 17 April 2026 06:18:04 +0000 (0:00:00.183) 0:23:07.384 **********
2026-04-17 06:18:05.550332 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:05.550343 | orchestrator |
2026-04-17 06:18:05.550354 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-17 06:18:05.550364 | orchestrator | Friday 17 April 2026 06:18:04 +0000 (0:00:00.141) 0:23:07.525 **********
2026-04-17 06:18:05.550375 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:18:05.550386 | orchestrator |
2026-04-17 06:18:05.550396 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-17 06:18:05.550407 | orchestrator | Friday 17 April 2026 06:18:04 +0000 (0:00:00.205) 0:23:07.731 **********
2026-04-17 06:18:05.550418 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:05.550428 | orchestrator |
2026-04-17 06:18:05.550439 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-17 06:18:05.550450 | orchestrator | Friday 17 April 2026 06:18:05 +0000 (0:00:00.136) 0:23:07.867 **********
2026-04-17 06:18:05.550460 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:18:05.550471 | orchestrator |
2026-04-17 06:18:05.550482 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-17 06:18:05.550492 | orchestrator | Friday 17 April 2026 06:18:05 +0000 (0:00:00.193) 0:23:08.060 **********
2026-04-17 06:18:05.550534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-17 06:18:05.550570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1',
'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'uuids': ['0c9a4a4e-baea-4a48-b886-e6edd30675e6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB']}})  2026-04-17 06:18:05.550610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdcd9064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-17 06:18:05.550644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd']}})  2026-04-17 06:18:05.550657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:18:05.550669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:18:05.550681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-04-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 06:18:05.550693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:18:05.550710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM', 'dm-uuid-CRYPT-LUKS2-23d95080c3d748658de3cafbcbf22080-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:18:05.550729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:18:05.550741 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'uuids': ['23d95080-c3d7-4865-8de3-cafbcbf22080'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM']}})  2026-04-17 06:18:05.550760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0']}})  2026-04-17 06:18:05.916579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:18:05.916754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '11ed6889', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 06:18:05.917526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:18:05.917550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:18:05.917563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB', 'dm-uuid-CRYPT-LUKS2-0c9a4a4ebaea4a48b886e6edd30675e6-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:18:05.917576 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:05.917590 | orchestrator | 2026-04-17 06:18:05.917624 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 06:18:05.917637 | orchestrator | Friday 17 April 2026 06:18:05 +0000 (0:00:00.402) 0:23:08.462 ********** 2026-04-17 06:18:05.917649 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:05.917663 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'uuids': ['0c9a4a4e-baea-4a48-b886-e6edd30675e6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:05.917691 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdcd9064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:05.917705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:05.917717 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:05.917737 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:06.045799 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:06.045945 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:06.045998 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM', 'dm-uuid-CRYPT-LUKS2-23d95080c3d748658de3cafbcbf22080-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:06.046011 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:06.046082 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'uuids': ['23d95080-c3d7-4865-8de3-cafbcbf22080'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:06.046117 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:06.046142 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:06.046162 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '11ed6889', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:06.046176 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:06.046195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:18:18.424109 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB', 'dm-uuid-CRYPT-LUKS2-0c9a4a4ebaea4a48b886e6edd30675e6-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:18:18.424225 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:18.424242 | orchestrator |
2026-04-17 06:18:18.424271 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-17 06:18:18.424284 | orchestrator | Friday 17 April 2026 06:18:06 +0000 (0:00:00.469) 0:23:08.932 **********
2026-04-17 06:18:18.424295 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:18:18.424306 | orchestrator |
2026-04-17 06:18:18.424317 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-17 06:18:18.424328 | orchestrator | Friday 17 April 2026 06:18:06 +0000 (0:00:00.515) 0:23:09.448 **********
2026-04-17 06:18:18.424339 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:18:18.424349 | orchestrator |
2026-04-17 06:18:18.424360 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:18:18.424371 | orchestrator | Friday 17 April 2026 06:18:06 +0000 (0:00:00.155) 0:23:09.603 **********
2026-04-17 06:18:18.424382 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:18:18.424393 | orchestrator |
2026-04-17 06:18:18.424404 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:18:18.424415 | orchestrator | Friday 17 April 2026 06:18:07 +0000 (0:00:00.986) 0:23:10.590 **********
2026-04-17 06:18:18.424426 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:18.424437 | orchestrator |
2026-04-17 06:18:18.424447 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:18:18.424458 | orchestrator | Friday 17 April 2026 06:18:07 +0000 (0:00:00.135) 0:23:10.725 **********
2026-04-17 06:18:18.424468 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:18.424479 | orchestrator |
2026-04-17 06:18:18.424490 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:18:18.424500 | orchestrator | Friday 17 April 2026 06:18:08 +0000 (0:00:00.283) 0:23:11.008 **********
2026-04-17 06:18:18.424511 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:18.424550 | orchestrator |
2026-04-17 06:18:18.424563 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 06:18:18.424574 | orchestrator | Friday 17 April 2026 06:18:08 +0000 (0:00:00.175) 0:23:11.183 **********
2026-04-17 06:18:18.424585 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 06:18:18.424597 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 06:18:18.424608 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 06:18:18.424621 | orchestrator |
2026-04-17 06:18:18.424633 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 06:18:18.424646 | orchestrator | Friday 17 April 2026 06:18:09 +0000 (0:00:00.777) 0:23:11.961 **********
2026-04-17 06:18:18.424658 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 06:18:18.424671 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 06:18:18.424683 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 06:18:18.424695 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:18.424729 | orchestrator |
2026-04-17 06:18:18.424742 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-17 06:18:18.424754 | orchestrator | Friday 17 April 2026 06:18:09 +0000 (0:00:00.184) 0:23:12.145 **********
2026-04-17 06:18:18.424767 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-04-17 06:18:18.424780 | orchestrator |
2026-04-17 06:18:18.424793 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:18:18.424807 | orchestrator | Friday 17 April 2026 06:18:09 +0000 (0:00:00.257) 0:23:12.402 **********
2026-04-17 06:18:18.424820 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:18.424831 | orchestrator |
2026-04-17 06:18:18.424843 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:18:18.424856 | orchestrator | Friday 17 April 2026 06:18:09 +0000 (0:00:00.168) 0:23:12.571 **********
2026-04-17 06:18:18.424867 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:18.424880 | orchestrator |
2026-04-17 06:18:18.424892 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:18:18.424904 | orchestrator | Friday 17 April 2026 06:18:09 +0000 (0:00:00.159) 0:23:12.730 **********
2026-04-17 06:18:18.424917 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:18.424928 | orchestrator |
2026-04-17 06:18:18.424940 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:18:18.424953 | orchestrator | Friday 17 April 2026 06:18:10 +0000 (0:00:00.173) 0:23:12.904 **********
2026-04-17 06:18:18.424965 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:18:18.424975 | orchestrator |
2026-04-17 06:18:18.424986 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:18:18.424997 | orchestrator | Friday 17 April 2026 06:18:10 +0000 (0:00:00.275) 0:23:13.180 **********
2026-04-17 06:18:18.425007 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-17 06:18:18.425036 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 06:18:18.425047 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-17 06:18:18.425058 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:18.425069 | orchestrator |
2026-04-17 06:18:18.425080 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 06:18:18.425091 | orchestrator | Friday 17 April 2026 06:18:11 +0000 (0:00:00.985) 0:23:14.165 **********
2026-04-17 06:18:18.425102 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-17 06:18:18.425113 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 06:18:18.425123 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-17 06:18:18.425134 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:18.425145 | orchestrator |
2026-04-17 06:18:18.425155 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 06:18:18.425166 | orchestrator | Friday 17 April 2026 06:18:12 +0000 (0:00:00.923) 0:23:15.088 **********
2026-04-17 06:18:18.425177 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-17 06:18:18.425187 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 06:18:18.425203 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-17 06:18:18.425214 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:18:18.425225 | orchestrator |
2026-04-17 06:18:18.425235 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 06:18:18.425246 | orchestrator | Friday 17 April 2026 06:18:13 +0000 (0:00:01.296) 0:23:16.385 **********
2026-04-17 06:18:18.425257 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:18:18.425267 | orchestrator |
2026-04-17 06:18:18.425278 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 06:18:18.425288 | orchestrator | Friday 17 April 2026 06:18:13 +0000 (0:00:00.174) 0:23:16.560 **********
2026-04-17 06:18:18.425299 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-17 06:18:18.425317 | orchestrator |
2026-04-17 06:18:18.425328 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-17 06:18:18.425339 | orchestrator | Friday 17 April 2026 06:18:14 +0000 (0:00:00.438) 0:23:16.999 **********
2026-04-17 06:18:18.425350 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:18:18.425361 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:18:18.425371 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:18:18.425382 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 06:18:18.425392 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 06:18:18.425403 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:18:18.425414 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:18:18.425424 | orchestrator |
2026-04-17 06:18:18.425435 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-17 06:18:18.425446 | orchestrator | Friday 17 April 2026 06:18:15 +0000 (0:00:00.925) 0:23:17.924 **********
2026-04-17 06:18:18.425456 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:18:18.425467 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:18:18.425477 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:18:18.425488 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] =>
(item=testbed-node-3) 2026-04-17 06:18:18.425498 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-17 06:18:18.425509 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 06:18:18.425540 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:18:18.425560 | orchestrator | 2026-04-17 06:18:18.425578 | orchestrator | TASK [Prevent restart from the packaging] ************************************** 2026-04-17 06:18:18.425593 | orchestrator | Friday 17 April 2026 06:18:17 +0000 (0:00:01.944) 0:23:19.869 ********** 2026-04-17 06:18:18.425604 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:18.425615 | orchestrator | 2026-04-17 06:18:18.425625 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 06:18:18.425636 | orchestrator | Friday 17 April 2026 06:18:17 +0000 (0:00:00.141) 0:23:20.010 ********** 2026-04-17 06:18:18.425646 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-04-17 06:18:18.425657 | orchestrator | 2026-04-17 06:18:18.425668 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 06:18:18.425678 | orchestrator | Friday 17 April 2026 06:18:17 +0000 (0:00:00.230) 0:23:20.241 ********** 2026-04-17 06:18:18.425689 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-04-17 06:18:18.425700 | orchestrator | 2026-04-17 06:18:18.425710 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 06:18:18.425721 | orchestrator | Friday 17 April 2026 06:18:17 +0000 (0:00:00.228) 0:23:20.470 ********** 2026-04-17 06:18:18.425731 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:18.425742 | orchestrator | 2026-04-17 06:18:18.425752 | orchestrator 
| TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 06:18:18.425763 | orchestrator | Friday 17 April 2026 06:18:17 +0000 (0:00:00.157) 0:23:20.627 ********** 2026-04-17 06:18:18.425774 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:18.425784 | orchestrator | 2026-04-17 06:18:18.425795 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-17 06:18:18.425812 | orchestrator | Friday 17 April 2026 06:18:18 +0000 (0:00:00.534) 0:23:21.162 ********** 2026-04-17 06:18:30.945376 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.945507 | orchestrator | 2026-04-17 06:18:30.945523 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 06:18:30.945535 | orchestrator | Friday 17 April 2026 06:18:19 +0000 (0:00:00.917) 0:23:22.079 ********** 2026-04-17 06:18:30.945601 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.945611 | orchestrator | 2026-04-17 06:18:30.945621 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-17 06:18:30.945631 | orchestrator | Friday 17 April 2026 06:18:19 +0000 (0:00:00.519) 0:23:22.599 ********** 2026-04-17 06:18:30.945640 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.945651 | orchestrator | 2026-04-17 06:18:30.945661 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 06:18:30.945670 | orchestrator | Friday 17 April 2026 06:18:20 +0000 (0:00:00.147) 0:23:22.747 ********** 2026-04-17 06:18:30.945680 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.945690 | orchestrator | 2026-04-17 06:18:30.945699 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 06:18:30.945709 | orchestrator | Friday 17 April 2026 06:18:20 +0000 (0:00:00.164) 0:23:22.911 ********** 2026-04-17 06:18:30.945733 | 
orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.945743 | orchestrator | 2026-04-17 06:18:30.945753 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 06:18:30.945763 | orchestrator | Friday 17 April 2026 06:18:20 +0000 (0:00:00.132) 0:23:23.044 ********** 2026-04-17 06:18:30.945773 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.945782 | orchestrator | 2026-04-17 06:18:30.945792 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 06:18:30.945802 | orchestrator | Friday 17 April 2026 06:18:20 +0000 (0:00:00.545) 0:23:23.589 ********** 2026-04-17 06:18:30.945812 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.945821 | orchestrator | 2026-04-17 06:18:30.945831 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 06:18:30.945840 | orchestrator | Friday 17 April 2026 06:18:21 +0000 (0:00:00.562) 0:23:24.152 ********** 2026-04-17 06:18:30.945850 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.945859 | orchestrator | 2026-04-17 06:18:30.945869 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 06:18:30.945878 | orchestrator | Friday 17 April 2026 06:18:21 +0000 (0:00:00.149) 0:23:24.301 ********** 2026-04-17 06:18:30.945890 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.945901 | orchestrator | 2026-04-17 06:18:30.945911 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 06:18:30.945922 | orchestrator | Friday 17 April 2026 06:18:21 +0000 (0:00:00.134) 0:23:24.436 ********** 2026-04-17 06:18:30.945933 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.945944 | orchestrator | 2026-04-17 06:18:30.945955 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 
06:18:30.945966 | orchestrator | Friday 17 April 2026 06:18:21 +0000 (0:00:00.159) 0:23:24.595 ********** 2026-04-17 06:18:30.945976 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.945987 | orchestrator | 2026-04-17 06:18:30.945998 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 06:18:30.946008 | orchestrator | Friday 17 April 2026 06:18:22 +0000 (0:00:00.164) 0:23:24.759 ********** 2026-04-17 06:18:30.946075 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.946087 | orchestrator | 2026-04-17 06:18:30.946099 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 06:18:30.946109 | orchestrator | Friday 17 April 2026 06:18:22 +0000 (0:00:00.163) 0:23:24.923 ********** 2026-04-17 06:18:30.946120 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946130 | orchestrator | 2026-04-17 06:18:30.946141 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 06:18:30.946152 | orchestrator | Friday 17 April 2026 06:18:22 +0000 (0:00:00.128) 0:23:25.052 ********** 2026-04-17 06:18:30.946163 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946214 | orchestrator | 2026-04-17 06:18:30.946227 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 06:18:30.946238 | orchestrator | Friday 17 April 2026 06:18:22 +0000 (0:00:00.611) 0:23:25.663 ********** 2026-04-17 06:18:30.946248 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946258 | orchestrator | 2026-04-17 06:18:30.946267 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 06:18:30.946277 | orchestrator | Friday 17 April 2026 06:18:23 +0000 (0:00:00.167) 0:23:25.830 ********** 2026-04-17 06:18:30.946287 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.946296 | orchestrator | 2026-04-17 
06:18:30.946306 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 06:18:30.946315 | orchestrator | Friday 17 April 2026 06:18:23 +0000 (0:00:00.170) 0:23:26.001 ********** 2026-04-17 06:18:30.946325 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.946334 | orchestrator | 2026-04-17 06:18:30.946344 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-17 06:18:30.946353 | orchestrator | Friday 17 April 2026 06:18:23 +0000 (0:00:00.237) 0:23:26.239 ********** 2026-04-17 06:18:30.946363 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946372 | orchestrator | 2026-04-17 06:18:30.946382 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-17 06:18:30.946391 | orchestrator | Friday 17 April 2026 06:18:23 +0000 (0:00:00.143) 0:23:26.382 ********** 2026-04-17 06:18:30.946401 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946410 | orchestrator | 2026-04-17 06:18:30.946420 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-17 06:18:30.946429 | orchestrator | Friday 17 April 2026 06:18:23 +0000 (0:00:00.136) 0:23:26.519 ********** 2026-04-17 06:18:30.946439 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946448 | orchestrator | 2026-04-17 06:18:30.946458 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-17 06:18:30.946467 | orchestrator | Friday 17 April 2026 06:18:23 +0000 (0:00:00.149) 0:23:26.668 ********** 2026-04-17 06:18:30.946477 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946487 | orchestrator | 2026-04-17 06:18:30.946496 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-17 06:18:30.946523 | orchestrator | Friday 17 April 2026 06:18:24 +0000 (0:00:00.137) 0:23:26.806 
********** 2026-04-17 06:18:30.946533 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946577 | orchestrator | 2026-04-17 06:18:30.946587 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-17 06:18:30.946596 | orchestrator | Friday 17 April 2026 06:18:24 +0000 (0:00:00.191) 0:23:26.998 ********** 2026-04-17 06:18:30.946606 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946616 | orchestrator | 2026-04-17 06:18:30.946625 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-17 06:18:30.946635 | orchestrator | Friday 17 April 2026 06:18:24 +0000 (0:00:00.149) 0:23:27.147 ********** 2026-04-17 06:18:30.946644 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946654 | orchestrator | 2026-04-17 06:18:30.946664 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-17 06:18:30.946674 | orchestrator | Friday 17 April 2026 06:18:24 +0000 (0:00:00.127) 0:23:27.275 ********** 2026-04-17 06:18:30.946684 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946693 | orchestrator | 2026-04-17 06:18:30.946703 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-17 06:18:30.946718 | orchestrator | Friday 17 April 2026 06:18:24 +0000 (0:00:00.142) 0:23:27.417 ********** 2026-04-17 06:18:30.946728 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946738 | orchestrator | 2026-04-17 06:18:30.946748 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-17 06:18:30.946757 | orchestrator | Friday 17 April 2026 06:18:25 +0000 (0:00:00.595) 0:23:28.013 ********** 2026-04-17 06:18:30.946774 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946784 | orchestrator | 2026-04-17 06:18:30.946793 | orchestrator | TASK [ceph-common : Include 
configure_memory_allocator.yml] ******************** 2026-04-17 06:18:30.946803 | orchestrator | Friday 17 April 2026 06:18:25 +0000 (0:00:00.145) 0:23:28.158 ********** 2026-04-17 06:18:30.946813 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946822 | orchestrator | 2026-04-17 06:18:30.946832 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-17 06:18:30.946841 | orchestrator | Friday 17 April 2026 06:18:25 +0000 (0:00:00.142) 0:23:28.301 ********** 2026-04-17 06:18:30.946851 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.946861 | orchestrator | 2026-04-17 06:18:30.946870 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-17 06:18:30.946880 | orchestrator | Friday 17 April 2026 06:18:25 +0000 (0:00:00.219) 0:23:28.521 ********** 2026-04-17 06:18:30.946889 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.946899 | orchestrator | 2026-04-17 06:18:30.946909 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-17 06:18:30.946918 | orchestrator | Friday 17 April 2026 06:18:26 +0000 (0:00:00.926) 0:23:29.448 ********** 2026-04-17 06:18:30.946928 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.946937 | orchestrator | 2026-04-17 06:18:30.946947 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-17 06:18:30.946957 | orchestrator | Friday 17 April 2026 06:18:27 +0000 (0:00:01.209) 0:23:30.657 ********** 2026-04-17 06:18:30.946966 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-04-17 06:18:30.946977 | orchestrator | 2026-04-17 06:18:30.946986 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-17 06:18:30.946996 | orchestrator | Friday 17 April 2026 06:18:28 +0000 (0:00:00.245) 0:23:30.902 
********** 2026-04-17 06:18:30.947005 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.947015 | orchestrator | 2026-04-17 06:18:30.947024 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-17 06:18:30.947034 | orchestrator | Friday 17 April 2026 06:18:28 +0000 (0:00:00.147) 0:23:31.050 ********** 2026-04-17 06:18:30.947043 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.947053 | orchestrator | 2026-04-17 06:18:30.947063 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-17 06:18:30.947072 | orchestrator | Friday 17 April 2026 06:18:28 +0000 (0:00:00.163) 0:23:31.213 ********** 2026-04-17 06:18:30.947082 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 06:18:30.947091 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 06:18:30.947101 | orchestrator | 2026-04-17 06:18:30.947111 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-17 06:18:30.947120 | orchestrator | Friday 17 April 2026 06:18:29 +0000 (0:00:00.803) 0:23:32.017 ********** 2026-04-17 06:18:30.947130 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:30.947139 | orchestrator | 2026-04-17 06:18:30.947149 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-17 06:18:30.947159 | orchestrator | Friday 17 April 2026 06:18:29 +0000 (0:00:00.462) 0:23:32.480 ********** 2026-04-17 06:18:30.947168 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.947178 | orchestrator | 2026-04-17 06:18:30.947188 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-17 06:18:30.947197 | orchestrator | Friday 17 April 2026 06:18:29 +0000 (0:00:00.161) 0:23:32.641 ********** 2026-04-17 06:18:30.947207 | 
orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.947216 | orchestrator | 2026-04-17 06:18:30.947226 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-17 06:18:30.947236 | orchestrator | Friday 17 April 2026 06:18:30 +0000 (0:00:00.655) 0:23:33.297 ********** 2026-04-17 06:18:30.947245 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:30.947255 | orchestrator | 2026-04-17 06:18:30.947270 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-17 06:18:30.947280 | orchestrator | Friday 17 April 2026 06:18:30 +0000 (0:00:00.160) 0:23:33.458 ********** 2026-04-17 06:18:30.947289 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-04-17 06:18:30.947299 | orchestrator | 2026-04-17 06:18:30.947308 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-17 06:18:30.947324 | orchestrator | Friday 17 April 2026 06:18:30 +0000 (0:00:00.222) 0:23:33.681 ********** 2026-04-17 06:18:46.113432 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:46.113624 | orchestrator | 2026-04-17 06:18:46.113647 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-17 06:18:46.113660 | orchestrator | Friday 17 April 2026 06:18:31 +0000 (0:00:00.714) 0:23:34.395 ********** 2026-04-17 06:18:46.113673 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-17 06:18:46.113684 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 06:18:46.113695 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 06:18:46.113706 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.113718 | orchestrator | 2026-04-17 06:18:46.113729 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-04-17 06:18:46.113740 | orchestrator | Friday 17 April 2026 06:18:31 +0000 (0:00:00.140) 0:23:34.536 ********** 2026-04-17 06:18:46.113751 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.113761 | orchestrator | 2026-04-17 06:18:46.113789 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-17 06:18:46.113800 | orchestrator | Friday 17 April 2026 06:18:31 +0000 (0:00:00.150) 0:23:34.686 ********** 2026-04-17 06:18:46.113811 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.113822 | orchestrator | 2026-04-17 06:18:46.113833 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-17 06:18:46.113843 | orchestrator | Friday 17 April 2026 06:18:32 +0000 (0:00:00.185) 0:23:34.872 ********** 2026-04-17 06:18:46.113854 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.113865 | orchestrator | 2026-04-17 06:18:46.113875 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-17 06:18:46.113903 | orchestrator | Friday 17 April 2026 06:18:32 +0000 (0:00:00.183) 0:23:35.056 ********** 2026-04-17 06:18:46.113914 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.113925 | orchestrator | 2026-04-17 06:18:46.113936 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-17 06:18:46.113950 | orchestrator | Friday 17 April 2026 06:18:32 +0000 (0:00:00.164) 0:23:35.220 ********** 2026-04-17 06:18:46.113963 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.113975 | orchestrator | 2026-04-17 06:18:46.113987 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-17 06:18:46.113999 | orchestrator | Friday 17 April 2026 06:18:32 +0000 (0:00:00.164) 0:23:35.385 ********** 2026-04-17 06:18:46.114012 | orchestrator | 
ok: [testbed-node-4] 2026-04-17 06:18:46.114090 | orchestrator | 2026-04-17 06:18:46.114104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-17 06:18:46.114117 | orchestrator | Friday 17 April 2026 06:18:34 +0000 (0:00:01.454) 0:23:36.840 ********** 2026-04-17 06:18:46.114129 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:18:46.114141 | orchestrator | 2026-04-17 06:18:46.114154 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-17 06:18:46.114166 | orchestrator | Friday 17 April 2026 06:18:34 +0000 (0:00:00.139) 0:23:36.979 ********** 2026-04-17 06:18:46.114178 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-04-17 06:18:46.114191 | orchestrator | 2026-04-17 06:18:46.114203 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-17 06:18:46.114215 | orchestrator | Friday 17 April 2026 06:18:34 +0000 (0:00:00.637) 0:23:37.617 ********** 2026-04-17 06:18:46.114252 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.114266 | orchestrator | 2026-04-17 06:18:46.114278 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-17 06:18:46.114291 | orchestrator | Friday 17 April 2026 06:18:35 +0000 (0:00:00.161) 0:23:37.778 ********** 2026-04-17 06:18:46.114303 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.114314 | orchestrator | 2026-04-17 06:18:46.114325 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-17 06:18:46.114336 | orchestrator | Friday 17 April 2026 06:18:35 +0000 (0:00:00.174) 0:23:37.953 ********** 2026-04-17 06:18:46.114346 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.114357 | orchestrator | 2026-04-17 06:18:46.114368 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-04-17 06:18:46.114378 | orchestrator | Friday 17 April 2026 06:18:35 +0000 (0:00:00.170) 0:23:38.124 ********** 2026-04-17 06:18:46.114389 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.114400 | orchestrator | 2026-04-17 06:18:46.114411 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-17 06:18:46.114422 | orchestrator | Friday 17 April 2026 06:18:35 +0000 (0:00:00.179) 0:23:38.304 ********** 2026-04-17 06:18:46.114433 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.114443 | orchestrator | 2026-04-17 06:18:46.114454 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-17 06:18:46.114465 | orchestrator | Friday 17 April 2026 06:18:35 +0000 (0:00:00.163) 0:23:38.467 ********** 2026-04-17 06:18:46.114476 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.114486 | orchestrator | 2026-04-17 06:18:46.114497 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-17 06:18:46.114508 | orchestrator | Friday 17 April 2026 06:18:35 +0000 (0:00:00.155) 0:23:38.623 ********** 2026-04-17 06:18:46.114518 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.114529 | orchestrator | 2026-04-17 06:18:46.114540 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-17 06:18:46.114550 | orchestrator | Friday 17 April 2026 06:18:36 +0000 (0:00:00.181) 0:23:38.804 ********** 2026-04-17 06:18:46.114590 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.114602 | orchestrator | 2026-04-17 06:18:46.114613 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-17 06:18:46.114623 | orchestrator | Friday 17 April 2026 06:18:36 +0000 (0:00:00.174) 0:23:38.979 ********** 2026-04-17 06:18:46.114634 | orchestrator | ok: [testbed-node-4] 
2026-04-17 06:18:46.114644 | orchestrator | 2026-04-17 06:18:46.114655 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-17 06:18:46.114686 | orchestrator | Friday 17 April 2026 06:18:36 +0000 (0:00:00.227) 0:23:39.207 ********** 2026-04-17 06:18:46.114697 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-17 06:18:46.114709 | orchestrator | 2026-04-17 06:18:46.114720 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-17 06:18:46.114731 | orchestrator | Friday 17 April 2026 06:18:37 +0000 (0:00:00.583) 0:23:39.791 ********** 2026-04-17 06:18:46.114742 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-04-17 06:18:46.114753 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-17 06:18:46.114763 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-17 06:18:46.114774 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-17 06:18:46.114785 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-17 06:18:46.114795 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-17 06:18:46.114806 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-17 06:18:46.114816 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-17 06:18:46.114834 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-17 06:18:46.114854 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-17 06:18:46.114864 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-17 06:18:46.114875 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-17 06:18:46.114886 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-17 06:18:46.114896 | 
orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-17 06:18:46.114907 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-17 06:18:46.114918 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-17 06:18:46.114929 | orchestrator | 2026-04-17 06:18:46.114939 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-17 06:18:46.114950 | orchestrator | Friday 17 April 2026 06:18:42 +0000 (0:00:05.477) 0:23:45.268 ********** 2026-04-17 06:18:46.114961 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-17 06:18:46.114971 | orchestrator | 2026-04-17 06:18:46.114982 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-17 06:18:46.114993 | orchestrator | Friday 17 April 2026 06:18:42 +0000 (0:00:00.249) 0:23:45.518 ********** 2026-04-17 06:18:46.115004 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 06:18:46.115016 | orchestrator | 2026-04-17 06:18:46.115027 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-17 06:18:46.115037 | orchestrator | Friday 17 April 2026 06:18:43 +0000 (0:00:00.544) 0:23:46.063 ********** 2026-04-17 06:18:46.115048 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 06:18:46.115059 | orchestrator | 2026-04-17 06:18:46.115069 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-17 06:18:46.115080 | orchestrator | Friday 17 April 2026 06:18:44 +0000 (0:00:00.984) 0:23:47.048 ********** 2026-04-17 06:18:46.115091 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.115101 | orchestrator | 
2026-04-17 06:18:46.115112 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-17 06:18:46.115123 | orchestrator | Friday 17 April 2026 06:18:44 +0000 (0:00:00.139) 0:23:47.187 ********** 2026-04-17 06:18:46.115133 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.115144 | orchestrator | 2026-04-17 06:18:46.115155 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-17 06:18:46.115166 | orchestrator | Friday 17 April 2026 06:18:44 +0000 (0:00:00.139) 0:23:47.326 ********** 2026-04-17 06:18:46.115176 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.115187 | orchestrator | 2026-04-17 06:18:46.115197 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-17 06:18:46.115208 | orchestrator | Friday 17 April 2026 06:18:44 +0000 (0:00:00.142) 0:23:47.469 ********** 2026-04-17 06:18:46.115219 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.115229 | orchestrator | 2026-04-17 06:18:46.115240 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-17 06:18:46.115250 | orchestrator | Friday 17 April 2026 06:18:44 +0000 (0:00:00.133) 0:23:47.603 ********** 2026-04-17 06:18:46.115261 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.115272 | orchestrator | 2026-04-17 06:18:46.115283 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-17 06:18:46.115293 | orchestrator | Friday 17 April 2026 06:18:44 +0000 (0:00:00.137) 0:23:47.740 ********** 2026-04-17 06:18:46.115304 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.115315 | orchestrator | 2026-04-17 06:18:46.115325 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-17 06:18:46.115336 | 
orchestrator | Friday 17 April 2026 06:18:45 +0000 (0:00:00.159) 0:23:47.899 ********** 2026-04-17 06:18:46.115353 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.115364 | orchestrator | 2026-04-17 06:18:46.115374 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-17 06:18:46.115385 | orchestrator | Friday 17 April 2026 06:18:45 +0000 (0:00:00.191) 0:23:48.091 ********** 2026-04-17 06:18:46.115396 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.115407 | orchestrator | 2026-04-17 06:18:46.115418 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-17 06:18:46.115428 | orchestrator | Friday 17 April 2026 06:18:45 +0000 (0:00:00.606) 0:23:48.697 ********** 2026-04-17 06:18:46.115439 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:18:46.115450 | orchestrator | 2026-04-17 06:18:46.115468 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-17 06:19:12.053804 | orchestrator | Friday 17 April 2026 06:18:46 +0000 (0:00:00.146) 0:23:48.844 ********** 2026-04-17 06:19:12.053925 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.053942 | orchestrator | 2026-04-17 06:19:12.053955 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-17 06:19:12.053966 | orchestrator | Friday 17 April 2026 06:18:46 +0000 (0:00:00.140) 0:23:48.985 ********** 2026-04-17 06:19:12.053977 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.053988 | orchestrator | 2026-04-17 06:19:12.053999 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-17 06:19:12.054010 | orchestrator | Friday 17 April 2026 06:18:46 +0000 (0:00:00.154) 0:23:49.140 ********** 2026-04-17 06:19:12.054083 | orchestrator | changed: [testbed-node-4 -> 
testbed-node-2(192.168.16.12)] 2026-04-17 06:19:12.054096 | orchestrator | 2026-04-17 06:19:12.054107 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-17 06:19:12.054135 | orchestrator | Friday 17 April 2026 06:18:49 +0000 (0:00:03.555) 0:23:52.696 ********** 2026-04-17 06:19:12.054147 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 06:19:12.054160 | orchestrator | 2026-04-17 06:19:12.054171 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-17 06:19:12.054182 | orchestrator | Friday 17 April 2026 06:18:50 +0000 (0:00:00.197) 0:23:52.893 ********** 2026-04-17 06:19:12.054195 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-17 06:19:12.054211 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-17 06:19:12.054224 | orchestrator | 2026-04-17 06:19:12.054235 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-17 06:19:12.054246 | orchestrator | Friday 17 April 2026 06:18:54 +0000 (0:00:03.881) 0:23:56.775 ********** 2026-04-17 06:19:12.054257 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.054268 | orchestrator | 2026-04-17 06:19:12.054279 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-04-17 06:19:12.054290 | orchestrator | Friday 17 April 2026 06:18:54 +0000 (0:00:00.140) 0:23:56.916 ********** 2026-04-17 06:19:12.054301 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.054311 | orchestrator | 2026-04-17 06:19:12.054323 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 06:19:12.054334 | orchestrator | Friday 17 April 2026 06:18:54 +0000 (0:00:00.153) 0:23:57.069 ********** 2026-04-17 06:19:12.054345 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.054378 | orchestrator | 2026-04-17 06:19:12.054389 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-17 06:19:12.054400 | orchestrator | Friday 17 April 2026 06:18:54 +0000 (0:00:00.190) 0:23:57.259 ********** 2026-04-17 06:19:12.054411 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.054421 | orchestrator | 2026-04-17 06:19:12.054432 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 06:19:12.054443 | orchestrator | Friday 17 April 2026 06:18:54 +0000 (0:00:00.159) 0:23:57.419 ********** 2026-04-17 06:19:12.054454 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.054464 | orchestrator | 2026-04-17 06:19:12.054475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 06:19:12.054486 | orchestrator | Friday 17 April 2026 06:18:54 +0000 (0:00:00.210) 0:23:57.629 ********** 2026-04-17 06:19:12.054496 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:19:12.054508 | orchestrator | 2026-04-17 06:19:12.054519 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 06:19:12.054530 | orchestrator | Friday 17 April 2026 06:18:55 +0000 (0:00:00.250) 0:23:57.880 
********** 2026-04-17 06:19:12.054540 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-17 06:19:12.054552 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-17 06:19:12.054563 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-17 06:19:12.054573 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.054605 | orchestrator | 2026-04-17 06:19:12.054616 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:19:12.054627 | orchestrator | Friday 17 April 2026 06:18:56 +0000 (0:00:01.020) 0:23:58.900 ********** 2026-04-17 06:19:12.054638 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-17 06:19:12.054649 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-17 06:19:12.054659 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-17 06:19:12.054670 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.054681 | orchestrator | 2026-04-17 06:19:12.054691 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:19:12.054702 | orchestrator | Friday 17 April 2026 06:18:57 +0000 (0:00:01.366) 0:24:00.267 ********** 2026-04-17 06:19:12.054713 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-17 06:19:12.054724 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-17 06:19:12.054734 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-17 06:19:12.054762 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.054774 | orchestrator | 2026-04-17 06:19:12.054785 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 06:19:12.054795 | orchestrator | Friday 17 April 2026 06:18:58 +0000 (0:00:00.506) 0:24:00.774 ********** 2026-04-17 06:19:12.054806 | orchestrator | 
ok: [testbed-node-4] 2026-04-17 06:19:12.054817 | orchestrator | 2026-04-17 06:19:12.054827 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:19:12.054838 | orchestrator | Friday 17 April 2026 06:18:58 +0000 (0:00:00.246) 0:24:01.020 ********** 2026-04-17 06:19:12.054849 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-17 06:19:12.054859 | orchestrator | 2026-04-17 06:19:12.054870 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-17 06:19:12.054881 | orchestrator | Friday 17 April 2026 06:18:58 +0000 (0:00:00.484) 0:24:01.504 ********** 2026-04-17 06:19:12.054892 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:19:12.054903 | orchestrator | 2026-04-17 06:19:12.054919 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-17 06:19:12.054931 | orchestrator | Friday 17 April 2026 06:18:59 +0000 (0:00:00.815) 0:24:02.319 ********** 2026-04-17 06:19:12.054941 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.054952 | orchestrator | 2026-04-17 06:19:12.054963 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-17 06:19:12.054983 | orchestrator | Friday 17 April 2026 06:18:59 +0000 (0:00:00.164) 0:24:02.484 ********** 2026-04-17 06:19:12.054994 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4 2026-04-17 06:19:12.055005 | orchestrator | 2026-04-17 06:19:12.055015 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-17 06:19:12.055026 | orchestrator | Friday 17 April 2026 06:19:00 +0000 (0:00:00.605) 0:24:03.089 ********** 2026-04-17 06:19:12.055037 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-17 06:19:12.055048 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 
2026-04-17 06:19:12.055058 | orchestrator | 2026-04-17 06:19:12.055069 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-17 06:19:12.055080 | orchestrator | Friday 17 April 2026 06:19:01 +0000 (0:00:00.824) 0:24:03.914 ********** 2026-04-17 06:19:12.055090 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 06:19:12.055101 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-17 06:19:12.055112 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 06:19:12.055123 | orchestrator | 2026-04-17 06:19:12.055134 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-17 06:19:12.055144 | orchestrator | Friday 17 April 2026 06:19:03 +0000 (0:00:02.161) 0:24:06.075 ********** 2026-04-17 06:19:12.055155 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-17 06:19:12.055166 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-17 06:19:12.055176 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:19:12.055187 | orchestrator | 2026-04-17 06:19:12.055198 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-17 06:19:12.055208 | orchestrator | Friday 17 April 2026 06:19:04 +0000 (0:00:01.010) 0:24:07.086 ********** 2026-04-17 06:19:12.055219 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:19:12.055230 | orchestrator | 2026-04-17 06:19:12.055240 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-17 06:19:12.055251 | orchestrator | Friday 17 April 2026 06:19:05 +0000 (0:00:00.972) 0:24:08.059 ********** 2026-04-17 06:19:12.055262 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:12.055272 | orchestrator | 2026-04-17 06:19:12.055283 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-17 
06:19:12.055294 | orchestrator | Friday 17 April 2026 06:19:05 +0000 (0:00:00.133) 0:24:08.192 ********** 2026-04-17 06:19:12.055305 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4 2026-04-17 06:19:12.055316 | orchestrator | 2026-04-17 06:19:12.055327 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-17 06:19:12.055337 | orchestrator | Friday 17 April 2026 06:19:06 +0000 (0:00:00.633) 0:24:08.825 ********** 2026-04-17 06:19:12.055348 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4 2026-04-17 06:19:12.055358 | orchestrator | 2026-04-17 06:19:12.055369 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-17 06:19:12.055380 | orchestrator | Friday 17 April 2026 06:19:06 +0000 (0:00:00.611) 0:24:09.437 ********** 2026-04-17 06:19:12.055390 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:19:12.055401 | orchestrator | 2026-04-17 06:19:12.055412 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-17 06:19:12.055423 | orchestrator | Friday 17 April 2026 06:19:07 +0000 (0:00:01.033) 0:24:10.471 ********** 2026-04-17 06:19:12.055434 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:19:12.055444 | orchestrator | 2026-04-17 06:19:12.055455 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-17 06:19:12.055466 | orchestrator | Friday 17 April 2026 06:19:08 +0000 (0:00:00.960) 0:24:11.431 ********** 2026-04-17 06:19:12.055477 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:19:12.055487 | orchestrator | 2026-04-17 06:19:12.055505 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-17 06:19:12.055515 | orchestrator | Friday 17 April 2026 06:19:09 +0000 (0:00:01.244) 0:24:12.676 ********** 2026-04-17 06:19:12.055526 | 
orchestrator | ok: [testbed-node-4] 2026-04-17 06:19:12.055537 | orchestrator | 2026-04-17 06:19:12.055547 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-17 06:19:12.055558 | orchestrator | Friday 17 April 2026 06:19:11 +0000 (0:00:01.293) 0:24:13.970 ********** 2026-04-17 06:19:12.055568 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:19:12.055618 | orchestrator | 2026-04-17 06:19:12.055631 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-04-17 06:19:12.055642 | orchestrator | Friday 17 April 2026 06:19:11 +0000 (0:00:00.766) 0:24:14.736 ********** 2026-04-17 06:19:12.055661 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:19:30.545819 | orchestrator | 2026-04-17 06:19:30.545937 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-04-17 06:19:30.545953 | orchestrator | Friday 17 April 2026 06:19:12 +0000 (0:00:00.159) 0:24:14.895 ********** 2026-04-17 06:19:30.545965 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:19:30.545978 | orchestrator | 2026-04-17 06:19:30.545989 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-04-17 06:19:30.546002 | orchestrator | 2026-04-17 06:19:30.546013 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:19:30.546119 | orchestrator | Friday 17 April 2026 06:19:20 +0000 (0:00:08.693) 0:24:23.588 ********** 2026-04-17 06:19:30.546132 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5, testbed-node-3 2026-04-17 06:19:30.546144 | orchestrator | 2026-04-17 06:19:30.546155 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 06:19:30.546181 | orchestrator | Friday 17 April 2026 06:19:21 +0000 (0:00:00.446) 0:24:24.035 ********** 2026-04-17 06:19:30.546192 | 
orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:30.546204 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:30.546215 | orchestrator | 2026-04-17 06:19:30.546226 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-17 06:19:30.546237 | orchestrator | Friday 17 April 2026 06:19:21 +0000 (0:00:00.585) 0:24:24.620 ********** 2026-04-17 06:19:30.546248 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:30.546260 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:30.546280 | orchestrator | 2026-04-17 06:19:30.546298 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 06:19:30.546316 | orchestrator | Friday 17 April 2026 06:19:22 +0000 (0:00:00.251) 0:24:24.872 ********** 2026-04-17 06:19:30.546334 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:30.546354 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:30.546374 | orchestrator | 2026-04-17 06:19:30.546394 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:19:30.546414 | orchestrator | Friday 17 April 2026 06:19:22 +0000 (0:00:00.544) 0:24:25.416 ********** 2026-04-17 06:19:30.546429 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:30.546441 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:30.546453 | orchestrator | 2026-04-17 06:19:30.546465 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-17 06:19:30.546478 | orchestrator | Friday 17 April 2026 06:19:23 +0000 (0:00:00.734) 0:24:26.151 ********** 2026-04-17 06:19:30.546490 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:30.546503 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:30.546515 | orchestrator | 2026-04-17 06:19:30.546527 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-17 06:19:30.546540 | orchestrator | Friday 17 April 2026 
06:19:23 +0000 (0:00:00.262) 0:24:26.414 ********** 2026-04-17 06:19:30.546553 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:30.546565 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:30.546577 | orchestrator | 2026-04-17 06:19:30.546589 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-17 06:19:30.546668 | orchestrator | Friday 17 April 2026 06:19:23 +0000 (0:00:00.291) 0:24:26.706 ********** 2026-04-17 06:19:30.546681 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:30.546693 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:30.546704 | orchestrator | 2026-04-17 06:19:30.546714 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-17 06:19:30.546725 | orchestrator | Friday 17 April 2026 06:19:24 +0000 (0:00:00.275) 0:24:26.981 ********** 2026-04-17 06:19:30.546735 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:30.546746 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:30.546757 | orchestrator | 2026-04-17 06:19:30.546767 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-17 06:19:30.546778 | orchestrator | Friday 17 April 2026 06:19:24 +0000 (0:00:00.266) 0:24:27.248 ********** 2026-04-17 06:19:30.546789 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:19:30.546799 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:19:30.546810 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:19:30.546821 | orchestrator | 2026-04-17 06:19:30.546831 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-17 06:19:30.546842 | orchestrator | Friday 17 April 2026 06:19:25 +0000 (0:00:01.305) 0:24:28.553 ********** 2026-04-17 06:19:30.546852 | 
orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:30.546863 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:30.546873 | orchestrator | 2026-04-17 06:19:30.546884 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-17 06:19:30.546894 | orchestrator | Friday 17 April 2026 06:19:26 +0000 (0:00:00.412) 0:24:28.965 ********** 2026-04-17 06:19:30.546905 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:19:30.546915 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:19:30.546926 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:19:30.546936 | orchestrator | 2026-04-17 06:19:30.546947 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-17 06:19:30.546957 | orchestrator | Friday 17 April 2026 06:19:29 +0000 (0:00:02.825) 0:24:31.791 ********** 2026-04-17 06:19:30.546968 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-17 06:19:30.546980 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-17 06:19:30.546991 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-17 06:19:30.547002 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:30.547013 | orchestrator | 2026-04-17 06:19:30.547023 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-17 06:19:30.547034 | orchestrator | Friday 17 April 2026 06:19:29 +0000 (0:00:00.467) 0:24:32.258 ********** 2026-04-17 06:19:30.547067 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-17 06:19:30.547082 | 
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-17 06:19:30.547093 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 06:19:30.547110 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:30.547122 | orchestrator | 2026-04-17 06:19:30.547132 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-17 06:19:30.547151 | orchestrator | Friday 17 April 2026 06:19:30 +0000 (0:00:00.694) 0:24:32.953 ********** 2026-04-17 06:19:30.547164 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:30.547177 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:30.547188 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:30.547199 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:30.547210 | orchestrator | 2026-04-17 06:19:30.547221 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-17 06:19:30.547232 | orchestrator | Friday 17 April 2026 06:19:30 +0000 (0:00:00.199) 0:24:33.153 ********** 2026-04-17 06:19:30.547245 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:19:27.276282', 'end': '2026-04-17 06:19:27.324982', 'delta': '0:00:00.048700', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-17 06:19:30.547259 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:19:27.822060', 'end': '2026-04-17 06:19:27.858184', 'delta': '0:00:00.036124', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-17 06:19:30.547280 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:19:28.410769', 'end': '2026-04-17 06:19:28.461296', 'delta': '0:00:00.050527', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-17 06:19:36.374084 | orchestrator | 2026-04-17 06:19:36.374175 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-17 06:19:36.374186 | orchestrator | Friday 17 April 2026 06:19:30 +0000 (0:00:00.223) 0:24:33.376 ********** 2026-04-17 06:19:36.374206 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:36.374213 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:36.374220 | orchestrator | 2026-04-17 06:19:36.374226 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-17 06:19:36.374233 | orchestrator | Friday 17 April 2026 06:19:31 +0000 (0:00:00.442) 0:24:33.819 ********** 2026-04-17 06:19:36.374239 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:36.374247 | orchestrator | 2026-04-17 06:19:36.374254 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-17 06:19:36.374260 | orchestrator | Friday 17 
April 2026 06:19:31 +0000 (0:00:00.307) 0:24:34.127 ********** 2026-04-17 06:19:36.374267 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:36.374273 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:36.374279 | orchestrator | 2026-04-17 06:19:36.374286 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-17 06:19:36.374292 | orchestrator | Friday 17 April 2026 06:19:31 +0000 (0:00:00.297) 0:24:34.424 ********** 2026-04-17 06:19:36.374298 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:19:36.374305 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:19:36.374311 | orchestrator | 2026-04-17 06:19:36.374318 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:19:36.374324 | orchestrator | Friday 17 April 2026 06:19:32 +0000 (0:00:01.150) 0:24:35.575 ********** 2026-04-17 06:19:36.374330 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:36.374336 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:36.374342 | orchestrator | 2026-04-17 06:19:36.374348 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 06:19:36.374355 | orchestrator | Friday 17 April 2026 06:19:33 +0000 (0:00:00.269) 0:24:35.845 ********** 2026-04-17 06:19:36.374361 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:36.374367 | orchestrator | 2026-04-17 06:19:36.374373 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-17 06:19:36.374379 | orchestrator | Friday 17 April 2026 06:19:33 +0000 (0:00:00.565) 0:24:36.411 ********** 2026-04-17 06:19:36.374385 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:36.374392 | orchestrator | 2026-04-17 06:19:36.374398 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 
06:19:36.374404 | orchestrator | Friday 17 April 2026 06:19:33 +0000 (0:00:00.288) 0:24:36.699 ********** 2026-04-17 06:19:36.374410 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:36.374417 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:36.374423 | orchestrator | 2026-04-17 06:19:36.374429 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 06:19:36.374436 | orchestrator | Friday 17 April 2026 06:19:34 +0000 (0:00:00.241) 0:24:36.940 ********** 2026-04-17 06:19:36.374442 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:36.374448 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:36.374454 | orchestrator | 2026-04-17 06:19:36.374460 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-17 06:19:36.374467 | orchestrator | Friday 17 April 2026 06:19:34 +0000 (0:00:00.228) 0:24:37.169 ********** 2026-04-17 06:19:36.374473 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:36.374479 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:36.374485 | orchestrator | 2026-04-17 06:19:36.374491 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-17 06:19:36.374497 | orchestrator | Friday 17 April 2026 06:19:34 +0000 (0:00:00.284) 0:24:37.453 ********** 2026-04-17 06:19:36.374504 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:36.374510 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:36.374516 | orchestrator | 2026-04-17 06:19:36.374539 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 06:19:36.374545 | orchestrator | Friday 17 April 2026 06:19:34 +0000 (0:00:00.227) 0:24:37.681 ********** 2026-04-17 06:19:36.374551 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:36.374558 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:36.374564 | orchestrator | 2026-04-17 
06:19:36.374570 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 06:19:36.374577 | orchestrator | Friday 17 April 2026 06:19:35 +0000 (0:00:00.274) 0:24:37.955 ********** 2026-04-17 06:19:36.374584 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:36.374592 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:36.374599 | orchestrator | 2026-04-17 06:19:36.374629 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 06:19:36.374637 | orchestrator | Friday 17 April 2026 06:19:35 +0000 (0:00:00.688) 0:24:38.643 ********** 2026-04-17 06:19:36.374644 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:36.374651 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:36.374659 | orchestrator | 2026-04-17 06:19:36.374666 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 06:19:36.374673 | orchestrator | Friday 17 April 2026 06:19:36 +0000 (0:00:00.291) 0:24:38.935 ********** 2026-04-17 06:19:36.374683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.374713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'uuids': ['7145b7e9-237d-4eff-af62-82cfb643a183'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS']}})  2026-04-17 06:19:36.374724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ab95973', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 06:19:36.374733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f']}})  2026-04-17 06:19:36.374741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.374754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.374763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 06:19:36.374771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.374787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ', 'dm-uuid-CRYPT-LUKS2-9b48552cb2fb461da2ba0698b00ea049-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:19:36.487751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.487845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'uuids': ['9b48552c-b2fb-461d-a2ba-0698b00ea049'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ']}})  2026-04-17 06:19:36.487862 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d']}})  2026-04-17 06:19:36.487902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.487953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b9d69c97', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-17 06:19:36.487968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.487980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.487992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'uuids': ['7b3e98f1-7f68-4c04-9bb1-a0fd9b3252da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p']}})  2026-04-17 06:19:36.488011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.488023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS', 'dm-uuid-CRYPT-LUKS2-7145b7e9237d4effaf6282cfb643a183-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:19:36.488036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c054ea69', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-17 06:19:36.488049 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:36.488076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2']}})  2026-04-17 06:19:36.617422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.617494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.617517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 06:19:36.617524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.617528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px', 'dm-uuid-CRYPT-LUKS2-0eb8d7ab97d34aa3a4f06ee9564e4391-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:19:36.617532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-04-17 06:19:36.617537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'uuids': ['0eb8d7ab-97d3-4aa3-a4f0-6ee9564e4391'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px']}})  2026-04-17 06:19:36.617564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08']}})  2026-04-17 06:19:36.617569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.617579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fc59f804', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 06:19:36.617584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.617588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:19:36.617598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p', 'dm-uuid-CRYPT-LUKS2-7b3e98f17f684c049bb1a0fd9b3252da-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:19:36.961656 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:36.961758 | orchestrator | 2026-04-17 06:19:36.961774 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 06:19:36.961787 | orchestrator | Friday 17 April 2026 06:19:36 +0000 (0:00:00.553) 0:24:39.488 ********** 2026-04-17 06:19:36.961826 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:36.961842 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'uuids': ['7145b7e9-237d-4eff-af62-82cfb643a183'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:36.961855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ab95973', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:36.961889 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:36.961923 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:36.961944 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:36.961956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:36.961968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:36.961979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:36.961990 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ', 'dm-uuid-CRYPT-LUKS2-9b48552cb2fb461da2ba0698b00ea049-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ'], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:36.962014 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'uuids': ['7b3e98f1-7f68-4c04-9bb1-a0fd9b3252da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.034897 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.034990 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c054ea69', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.035005 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'uuids': ['9b48552c-b2fb-461d-a2ba-0698b00ea049'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.035032 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': 
{'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.035065 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.035096 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.035108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.035127 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 
'sas_device_handle': None, 'serial': 'b9d69c97', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.035155 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.151962 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.152059 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': 
'506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.152075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.152087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.152152 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px', 'dm-uuid-CRYPT-LUKS2-0eb8d7ab97d34aa3a4f06ee9564e4391-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.152207 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS', 'dm-uuid-CRYPT-LUKS2-7145b7e9237d4effaf6282cfb643a183-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.152220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.152232 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:37.152246 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': 
{'virtual': 1, 'links': {'ids': ['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'uuids': ['0eb8d7ab-97d3-4aa3-a4f0-6ee9564e4391'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.152259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.152278 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:37.152307 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fc59f804', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 
'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:47.410365 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:47.410486 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:47.410526 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p', 'dm-uuid-CRYPT-LUKS2-7b3e98f17f684c049bb1a0fd9b3252da-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:19:47.410578 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:47.410601 | orchestrator | 2026-04-17 06:19:47.410680 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-17 06:19:47.410703 | orchestrator | Friday 17 April 2026 06:19:37 +0000 (0:00:00.592) 0:24:40.080 ********** 2026-04-17 06:19:47.410720 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:47.410738 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:47.410757 | orchestrator | 2026-04-17 06:19:47.410776 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-17 06:19:47.410793 | orchestrator | Friday 17 April 2026 06:19:37 +0000 (0:00:00.633) 0:24:40.714 ********** 2026-04-17 06:19:47.410811 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:47.410829 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:47.410848 | orchestrator | 2026-04-17 06:19:47.410865 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 06:19:47.410885 | orchestrator | Friday 17 April 2026 06:19:38 +0000 (0:00:00.264) 0:24:40.979 ********** 2026-04-17 06:19:47.410904 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:47.410924 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:47.410943 | orchestrator | 2026-04-17 06:19:47.410962 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:19:47.410978 | orchestrator | Friday 17 April 2026 06:19:38 +0000 (0:00:00.626) 0:24:41.606 ********** 2026-04-17 06:19:47.410991 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:47.411004 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:47.411015 | orchestrator | 2026-04-17 06:19:47.411027 | orchestrator | TASK [ceph-facts : 
Read osd pool default crush rule] *************************** 2026-04-17 06:19:47.411040 | orchestrator | Friday 17 April 2026 06:19:39 +0000 (0:00:00.727) 0:24:42.333 ********** 2026-04-17 06:19:47.411052 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:47.411064 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:47.411077 | orchestrator | 2026-04-17 06:19:47.411089 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 06:19:47.411101 | orchestrator | Friday 17 April 2026 06:19:40 +0000 (0:00:00.435) 0:24:42.769 ********** 2026-04-17 06:19:47.411115 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:47.411127 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:47.411138 | orchestrator | 2026-04-17 06:19:47.411149 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-17 06:19:47.411160 | orchestrator | Friday 17 April 2026 06:19:40 +0000 (0:00:00.291) 0:24:43.060 ********** 2026-04-17 06:19:47.411170 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-17 06:19:47.411182 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-17 06:19:47.411192 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-17 06:19:47.411203 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-17 06:19:47.411213 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-17 06:19:47.411224 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-17 06:19:47.411234 | orchestrator | 2026-04-17 06:19:47.411245 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-17 06:19:47.411255 | orchestrator | Friday 17 April 2026 06:19:41 +0000 (0:00:00.901) 0:24:43.962 ********** 2026-04-17 06:19:47.411298 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-17 06:19:47.411311 | orchestrator 
| skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-17 06:19:47.411335 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-17 06:19:47.411346 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:47.411357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-17 06:19:47.411368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-17 06:19:47.411379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-17 06:19:47.411389 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:47.411400 | orchestrator | 2026-04-17 06:19:47.411411 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-17 06:19:47.411422 | orchestrator | Friday 17 April 2026 06:19:41 +0000 (0:00:00.295) 0:24:44.257 ********** 2026-04-17 06:19:47.411433 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5, testbed-node-3 2026-04-17 06:19:47.411446 | orchestrator | 2026-04-17 06:19:47.411457 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 06:19:47.411469 | orchestrator | Friday 17 April 2026 06:19:42 +0000 (0:00:00.955) 0:24:45.212 ********** 2026-04-17 06:19:47.411480 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:47.411491 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:47.411501 | orchestrator | 2026-04-17 06:19:47.411512 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-17 06:19:47.411522 | orchestrator | Friday 17 April 2026 06:19:42 +0000 (0:00:00.256) 0:24:45.469 ********** 2026-04-17 06:19:47.411533 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:47.411544 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:47.411554 | orchestrator | 2026-04-17 06:19:47.411565 
| orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 06:19:47.411576 | orchestrator | Friday 17 April 2026 06:19:42 +0000 (0:00:00.246) 0:24:45.715 ********** 2026-04-17 06:19:47.411586 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:47.411597 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:19:47.411608 | orchestrator | 2026-04-17 06:19:47.411652 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 06:19:47.411664 | orchestrator | Friday 17 April 2026 06:19:43 +0000 (0:00:00.270) 0:24:45.986 ********** 2026-04-17 06:19:47.411684 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:47.411695 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:47.411706 | orchestrator | 2026-04-17 06:19:47.411716 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 06:19:47.411727 | orchestrator | Friday 17 April 2026 06:19:43 +0000 (0:00:00.376) 0:24:46.362 ********** 2026-04-17 06:19:47.411738 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:19:47.411749 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:19:47.411759 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-17 06:19:47.411770 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:47.411780 | orchestrator | 2026-04-17 06:19:47.411791 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:19:47.411801 | orchestrator | Friday 17 April 2026 06:19:44 +0000 (0:00:00.396) 0:24:46.759 ********** 2026-04-17 06:19:47.411812 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:19:47.411823 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:19:47.411834 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  
2026-04-17 06:19:47.411845 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:47.411855 | orchestrator | 2026-04-17 06:19:47.411866 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:19:47.411877 | orchestrator | Friday 17 April 2026 06:19:44 +0000 (0:00:00.407) 0:24:47.166 ********** 2026-04-17 06:19:47.411887 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:19:47.411898 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:19:47.411917 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-17 06:19:47.411928 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:19:47.411938 | orchestrator | 2026-04-17 06:19:47.411950 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 06:19:47.411968 | orchestrator | Friday 17 April 2026 06:19:45 +0000 (0:00:00.925) 0:24:48.092 ********** 2026-04-17 06:19:47.411986 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:19:47.412006 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:19:47.412024 | orchestrator | 2026-04-17 06:19:47.412042 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:19:47.412061 | orchestrator | Friday 17 April 2026 06:19:46 +0000 (0:00:00.769) 0:24:48.862 ********** 2026-04-17 06:19:47.412079 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-17 06:19:47.412098 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-17 06:19:47.412117 | orchestrator | 2026-04-17 06:19:47.412129 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-17 06:19:47.412140 | orchestrator | Friday 17 April 2026 06:19:46 +0000 (0:00:00.473) 0:24:49.335 ********** 2026-04-17 06:19:47.412150 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 
06:19:47.412161 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:19:47.412171 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:19:47.412182 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:19:47.412192 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 06:19:47.412203 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-17 06:19:47.412223 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:20:02.434338 | orchestrator | 2026-04-17 06:20:02.434443 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-17 06:20:02.434456 | orchestrator | Friday 17 April 2026 06:19:47 +0000 (0:00:00.913) 0:24:50.249 ********** 2026-04-17 06:20:02.434466 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:20:02.434476 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:20:02.434484 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:20:02.434493 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:20:02.434502 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 06:20:02.434511 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-17 06:20:02.434520 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:20:02.434529 | orchestrator | 2026-04-17 06:20:02.434537 | orchestrator | TASK [Prevent restarts from the packaging] ************************************* 2026-04-17 
06:20:02.434545 | orchestrator | Friday 17 April 2026 06:19:49 +0000 (0:00:02.025) 0:24:52.275 ********** 2026-04-17 06:20:02.434554 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:20:02.434563 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:20:02.434572 | orchestrator | 2026-04-17 06:20:02.434580 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 06:20:02.434588 | orchestrator | Friday 17 April 2026 06:19:49 +0000 (0:00:00.236) 0:24:52.511 ********** 2026-04-17 06:20:02.434596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5, testbed-node-3 2026-04-17 06:20:02.434605 | orchestrator | 2026-04-17 06:20:02.434613 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 06:20:02.434621 | orchestrator | Friday 17 April 2026 06:19:50 +0000 (0:00:00.402) 0:24:52.914 ********** 2026-04-17 06:20:02.434701 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5, testbed-node-3 2026-04-17 06:20:02.434711 | orchestrator | 2026-04-17 06:20:02.434734 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 06:20:02.434742 | orchestrator | Friday 17 April 2026 06:19:51 +0000 (0:00:00.898) 0:24:53.813 ********** 2026-04-17 06:20:02.434750 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:20:02.434759 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:20:02.434767 | orchestrator | 2026-04-17 06:20:02.434775 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 06:20:02.434784 | orchestrator | Friday 17 April 2026 06:19:51 +0000 (0:00:00.241) 0:24:54.054 ********** 2026-04-17 06:20:02.434792 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:20:02.434800 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:20:02.434809 | 
orchestrator |
2026-04-17 06:20:02.434817 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 06:20:02.434825 | orchestrator | Friday 17 April 2026 06:19:51 +0000 (0:00:00.672) 0:24:54.727 **********
2026-04-17 06:20:02.434834 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:02.434842 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:02.434850 | orchestrator |
2026-04-17 06:20:02.434858 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 06:20:02.434867 | orchestrator | Friday 17 April 2026 06:19:52 +0000 (0:00:00.669) 0:24:55.397 **********
2026-04-17 06:20:02.434875 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:02.434883 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:02.434891 | orchestrator |
2026-04-17 06:20:02.434900 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 06:20:02.434908 | orchestrator | Friday 17 April 2026 06:19:53 +0000 (0:00:00.672) 0:24:56.070 **********
2026-04-17 06:20:02.434916 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.434924 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.434933 | orchestrator |
2026-04-17 06:20:02.434941 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 06:20:02.434949 | orchestrator | Friday 17 April 2026 06:19:53 +0000 (0:00:00.272) 0:24:56.342 **********
2026-04-17 06:20:02.434957 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.434966 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.434974 | orchestrator |
2026-04-17 06:20:02.434983 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 06:20:02.434991 | orchestrator | Friday 17 April 2026 06:19:54 +0000 (0:00:00.713) 0:24:57.055 **********
2026-04-17 06:20:02.434999 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435007 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435016 | orchestrator |
2026-04-17 06:20:02.435024 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 06:20:02.435032 | orchestrator | Friday 17 April 2026 06:19:54 +0000 (0:00:00.263) 0:24:57.319 **********
2026-04-17 06:20:02.435040 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:02.435048 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:02.435057 | orchestrator |
2026-04-17 06:20:02.435065 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 06:20:02.435073 | orchestrator | Friday 17 April 2026 06:19:55 +0000 (0:00:00.651) 0:24:57.971 **********
2026-04-17 06:20:02.435081 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:02.435089 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:02.435097 | orchestrator |
2026-04-17 06:20:02.435105 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 06:20:02.435114 | orchestrator | Friday 17 April 2026 06:19:55 +0000 (0:00:00.696) 0:24:58.667 **********
2026-04-17 06:20:02.435122 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435129 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435137 | orchestrator |
2026-04-17 06:20:02.435146 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 06:20:02.435159 | orchestrator | Friday 17 April 2026 06:19:56 +0000 (0:00:00.269) 0:24:58.937 **********
2026-04-17 06:20:02.435167 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435188 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435196 | orchestrator |
2026-04-17 06:20:02.435203 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 06:20:02.435211 | orchestrator | Friday 17 April 2026 06:19:56 +0000 (0:00:00.237) 0:24:59.175 **********
2026-04-17 06:20:02.435218 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:02.435226 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:02.435233 | orchestrator |
2026-04-17 06:20:02.435241 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 06:20:02.435248 | orchestrator | Friday 17 April 2026 06:19:57 +0000 (0:00:00.757) 0:24:59.932 **********
2026-04-17 06:20:02.435256 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:02.435264 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:02.435271 | orchestrator |
2026-04-17 06:20:02.435279 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 06:20:02.435286 | orchestrator | Friday 17 April 2026 06:19:57 +0000 (0:00:00.296) 0:25:00.229 **********
2026-04-17 06:20:02.435293 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:02.435301 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:02.435308 | orchestrator |
2026-04-17 06:20:02.435316 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 06:20:02.435323 | orchestrator | Friday 17 April 2026 06:19:57 +0000 (0:00:00.255) 0:25:00.484 **********
2026-04-17 06:20:02.435331 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435338 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435346 | orchestrator |
2026-04-17 06:20:02.435353 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 06:20:02.435361 | orchestrator | Friday 17 April 2026 06:19:57 +0000 (0:00:00.245) 0:25:00.730 **********
2026-04-17 06:20:02.435369 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435376 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435384 | orchestrator |
2026-04-17 06:20:02.435391 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 06:20:02.435399 | orchestrator | Friday 17 April 2026 06:19:58 +0000 (0:00:00.281) 0:25:01.011 **********
2026-04-17 06:20:02.435406 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435413 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435420 | orchestrator |
2026-04-17 06:20:02.435427 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 06:20:02.435434 | orchestrator | Friday 17 April 2026 06:19:58 +0000 (0:00:00.254) 0:25:01.266 **********
2026-04-17 06:20:02.435445 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:02.435453 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:02.435460 | orchestrator |
2026-04-17 06:20:02.435468 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 06:20:02.435476 | orchestrator | Friday 17 April 2026 06:19:58 +0000 (0:00:00.282) 0:25:01.548 **********
2026-04-17 06:20:02.435483 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:02.435491 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:02.435498 | orchestrator |
2026-04-17 06:20:02.435506 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-17 06:20:02.435514 | orchestrator | Friday 17 April 2026 06:19:59 +0000 (0:00:00.865) 0:25:02.414 **********
2026-04-17 06:20:02.435521 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435529 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435537 | orchestrator |
2026-04-17 06:20:02.435544 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-17 06:20:02.435552 | orchestrator | Friday 17 April 2026 06:19:59 +0000 (0:00:00.259) 0:25:02.673 **********
2026-04-17 06:20:02.435560 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435567 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435575 | orchestrator |
2026-04-17 06:20:02.435582 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-17 06:20:02.435595 | orchestrator | Friday 17 April 2026 06:20:00 +0000 (0:00:00.261) 0:25:02.935 **********
2026-04-17 06:20:02.435603 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435611 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435618 | orchestrator |
2026-04-17 06:20:02.435626 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-17 06:20:02.435681 | orchestrator | Friday 17 April 2026 06:20:00 +0000 (0:00:00.288) 0:25:03.223 **********
2026-04-17 06:20:02.435689 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435697 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435704 | orchestrator |
2026-04-17 06:20:02.435712 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-17 06:20:02.435719 | orchestrator | Friday 17 April 2026 06:20:00 +0000 (0:00:00.259) 0:25:03.483 **********
2026-04-17 06:20:02.435726 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435733 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435740 | orchestrator |
2026-04-17 06:20:02.435747 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-17 06:20:02.435754 | orchestrator | Friday 17 April 2026 06:20:00 +0000 (0:00:00.219) 0:25:03.703 **********
2026-04-17 06:20:02.435761 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435768 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435775 | orchestrator |
2026-04-17 06:20:02.435783 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-17 06:20:02.435790 | orchestrator | Friday 17 April 2026 06:20:01 +0000 (0:00:00.693) 0:25:04.396 **********
2026-04-17 06:20:02.435797 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435805 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435812 | orchestrator |
2026-04-17 06:20:02.435819 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-17 06:20:02.435825 | orchestrator | Friday 17 April 2026 06:20:01 +0000 (0:00:00.225) 0:25:04.622 **********
2026-04-17 06:20:02.435833 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435841 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435847 | orchestrator |
2026-04-17 06:20:02.435854 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-17 06:20:02.435861 | orchestrator | Friday 17 April 2026 06:20:02 +0000 (0:00:00.286) 0:25:04.908 **********
2026-04-17 06:20:02.435867 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:02.435874 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:02.435881 | orchestrator |
2026-04-17 06:20:02.435896 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-17 06:20:18.243223 | orchestrator | Friday 17 April 2026 06:20:02 +0000 (0:00:00.260) 0:25:05.169 **********
2026-04-17 06:20:18.243363 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.243391 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.243411 | orchestrator |
2026-04-17 06:20:18.243432 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-17 06:20:18.243450 | orchestrator | Friday 17 April 2026 06:20:02 +0000 (0:00:00.281) 0:25:05.450 **********
2026-04-17 06:20:18.243468 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.243486 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.243505 | orchestrator |
2026-04-17 06:20:18.243524 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-17 06:20:18.243543 | orchestrator | Friday 17 April 2026 06:20:02 +0000 (0:00:00.260) 0:25:05.711 **********
2026-04-17 06:20:18.243563 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.243581 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.243601 | orchestrator |
2026-04-17 06:20:18.243612 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-17 06:20:18.243623 | orchestrator | Friday 17 April 2026 06:20:03 +0000 (0:00:00.840) 0:25:06.552 **********
2026-04-17 06:20:18.243634 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:18.243733 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:18.243746 | orchestrator |
2026-04-17 06:20:18.243757 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-17 06:20:18.243768 | orchestrator | Friday 17 April 2026 06:20:04 +0000 (0:00:01.042) 0:25:07.595 **********
2026-04-17 06:20:18.243779 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:18.243790 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:18.243800 | orchestrator |
2026-04-17 06:20:18.243811 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-17 06:20:18.243822 | orchestrator | Friday 17 April 2026 06:20:06 +0000 (0:00:01.351) 0:25:08.946 **********
2026-04-17 06:20:18.243834 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5, testbed-node-3
2026-04-17 06:20:18.243845 | orchestrator |
2026-04-17 06:20:18.243855 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-17 06:20:18.243866 | orchestrator | Friday 17 April 2026 06:20:06 +0000 (0:00:00.414) 0:25:09.360 **********
2026-04-17 06:20:18.243877 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.243903 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.243914 | orchestrator |
2026-04-17 06:20:18.243925 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-17 06:20:18.243935 | orchestrator | Friday 17 April 2026 06:20:06 +0000 (0:00:00.254) 0:25:09.615 **********
2026-04-17 06:20:18.243946 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.243957 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.243968 | orchestrator |
2026-04-17 06:20:18.243978 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-17 06:20:18.243989 | orchestrator | Friday 17 April 2026 06:20:07 +0000 (0:00:00.720) 0:25:10.336 **********
2026-04-17 06:20:18.244000 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 06:20:18.244010 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 06:20:18.244021 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 06:20:18.244032 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 06:20:18.244042 | orchestrator |
2026-04-17 06:20:18.244053 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-17 06:20:18.244064 | orchestrator | Friday 17 April 2026 06:20:08 +0000 (0:00:00.948) 0:25:11.284 **********
2026-04-17 06:20:18.244074 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:18.244085 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:18.244095 | orchestrator |
2026-04-17 06:20:18.244106 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-17 06:20:18.244117 | orchestrator | Friday 17 April 2026 06:20:09 +0000 (0:00:00.594) 0:25:11.878 **********
2026-04-17 06:20:18.244127 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.244138 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.244149 | orchestrator |
2026-04-17 06:20:18.244159 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-17 06:20:18.244170 | orchestrator | Friday 17 April 2026 06:20:09 +0000 (0:00:00.277) 0:25:12.155 **********
2026-04-17 06:20:18.244181 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.244192 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.244203 | orchestrator |
2026-04-17 06:20:18.244213 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-17 06:20:18.244224 | orchestrator | Friday 17 April 2026 06:20:09 +0000 (0:00:00.273) 0:25:12.429 **********
2026-04-17 06:20:18.244235 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.244245 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.244256 | orchestrator |
2026-04-17 06:20:18.244267 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-17 06:20:18.244277 | orchestrator | Friday 17 April 2026 06:20:09 +0000 (0:00:00.251) 0:25:12.681 **********
2026-04-17 06:20:18.244296 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5, testbed-node-3
2026-04-17 06:20:18.244311 | orchestrator |
2026-04-17 06:20:18.244330 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-17 06:20:18.244348 | orchestrator | Friday 17 April 2026 06:20:10 +0000 (0:00:00.879) 0:25:13.560 **********
2026-04-17 06:20:18.244366 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:18.244383 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:18.244399 | orchestrator |
2026-04-17 06:20:18.244416 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-17 06:20:18.244434 | orchestrator | Friday 17 April 2026 06:20:11 +0000 (0:00:00.847) 0:25:14.407 **********
2026-04-17 06:20:18.244451 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 06:20:18.244494 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 06:20:18.244515 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 06:20:18.244532 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.244550 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 06:20:18.244562 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 06:20:18.244572 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 06:20:18.244583 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.244593 | orchestrator |
2026-04-17 06:20:18.244604 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-17 06:20:18.244614 | orchestrator | Friday 17 April 2026 06:20:11 +0000 (0:00:00.268) 0:25:14.676 **********
2026-04-17 06:20:18.244625 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.244636 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.244671 | orchestrator |
2026-04-17 06:20:18.244682 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-17 06:20:18.244693 | orchestrator | Friday 17 April 2026 06:20:12 +0000 (0:00:00.187) 0:25:14.924 **********
2026-04-17 06:20:18.244703 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.244714 | orchestrator |
2026-04-17 06:20:18.244725 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-17 06:20:18.244735 | orchestrator | Friday 17 April 2026 06:20:12 +0000 (0:00:00.187) 0:25:15.112 **********
2026-04-17 06:20:18.244746 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.244757 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.244768 | orchestrator |
2026-04-17 06:20:18.244778 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-17 06:20:18.244788 | orchestrator | Friday 17 April 2026 06:20:12 +0000 (0:00:00.269) 0:25:15.381 **********
2026-04-17 06:20:18.244799 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.244810 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.244820 | orchestrator |
2026-04-17 06:20:18.244831 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-17 06:20:18.244842 | orchestrator | Friday 17 April 2026 06:20:12 +0000 (0:00:00.292) 0:25:15.674 **********
2026-04-17 06:20:18.244860 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.244871 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.244882 | orchestrator |
2026-04-17 06:20:18.244893 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-17 06:20:18.244903 | orchestrator | Friday 17 April 2026 06:20:13 +0000 (0:00:00.642) 0:25:16.316 **********
2026-04-17 06:20:18.244914 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:18.244925 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:18.244935 | orchestrator |
2026-04-17 06:20:18.244946 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-17 06:20:18.244956 | orchestrator | Friday 17 April 2026 06:20:15 +0000 (0:00:01.606) 0:25:17.923 **********
2026-04-17 06:20:18.244967 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:18.244988 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:18.244998 | orchestrator |
2026-04-17 06:20:18.245009 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-17 06:20:18.245020 | orchestrator | Friday 17 April 2026 06:20:15 +0000 (0:00:00.259) 0:25:18.182 **********
2026-04-17 06:20:18.245030 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5, testbed-node-3
2026-04-17 06:20:18.245042 | orchestrator |
2026-04-17 06:20:18.245053 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-17 06:20:18.245063 | orchestrator | Friday 17 April 2026 06:20:15 +0000 (0:00:00.434) 0:25:18.617 **********
2026-04-17 06:20:18.245074 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.245085 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.245095 | orchestrator |
2026-04-17 06:20:18.245106 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-17 06:20:18.245117 | orchestrator | Friday 17 April 2026 06:20:16 +0000 (0:00:00.253) 0:25:18.871 **********
2026-04-17 06:20:18.245127 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.245138 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.245148 | orchestrator |
2026-04-17 06:20:18.245159 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-17 06:20:18.245184 | orchestrator | Friday 17 April 2026 06:20:16 +0000 (0:00:00.697) 0:25:19.569 **********
2026-04-17 06:20:18.245207 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.245218 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.245228 | orchestrator |
2026-04-17 06:20:18.245239 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-17 06:20:18.245250 | orchestrator | Friday 17 April 2026 06:20:17 +0000 (0:00:00.290) 0:25:19.859 **********
2026-04-17 06:20:18.245261 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.245280 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.245298 | orchestrator |
2026-04-17 06:20:18.245318 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-17 06:20:18.245336 | orchestrator | Friday 17 April 2026 06:20:17 +0000 (0:00:00.268) 0:25:20.127 **********
2026-04-17 06:20:18.245353 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.245371 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.245389 | orchestrator |
2026-04-17 06:20:18.245408 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-17 06:20:18.245429 | orchestrator | Friday 17 April 2026 06:20:17 +0000 (0:00:00.296) 0:25:20.424 **********
2026-04-17 06:20:18.245449 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.245469 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.245488 | orchestrator |
2026-04-17 06:20:18.245499 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-17 06:20:18.245510 | orchestrator | Friday 17 April 2026 06:20:17 +0000 (0:00:00.268) 0:25:20.693 **********
2026-04-17 06:20:18.245520 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:18.245531 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:18.245541 | orchestrator |
2026-04-17 06:20:18.245562 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-17 06:20:37.998857 | orchestrator | Friday 17 April 2026 06:20:18 +0000 (0:00:00.286) 0:25:20.979 **********
2026-04-17 06:20:37.999000 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:37.999031 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:37.999050 | orchestrator |
2026-04-17 06:20:37.999071 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-17 06:20:37.999090 | orchestrator | Friday 17 April 2026 06:20:18 +0000 (0:00:00.283) 0:25:21.263 **********
2026-04-17 06:20:37.999109 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:20:37.999129 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:20:37.999147 | orchestrator |
2026-04-17 06:20:37.999165 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-17 06:20:37.999184 | orchestrator | Friday 17 April 2026 06:20:19 +0000 (0:00:00.875) 0:25:22.138 **********
2026-04-17 06:20:37.999236 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5, testbed-node-3
2026-04-17 06:20:37.999258 | orchestrator |
2026-04-17 06:20:37.999278 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-17 06:20:37.999297 | orchestrator | Friday 17 April 2026 06:20:19 +0000 (0:00:00.436) 0:25:22.575 **********
2026-04-17 06:20:37.999315 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-17 06:20:37.999333 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-04-17 06:20:37.999352 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-17 06:20:37.999371 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-17 06:20:37.999388 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-17 06:20:37.999407 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-17 06:20:37.999425 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-17 06:20:37.999442 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-17 06:20:37.999461 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-17 06:20:37.999480 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-17 06:20:37.999498 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-17 06:20:37.999515 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-17 06:20:37.999551 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-17 06:20:37.999569 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-17 06:20:37.999587 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-17 06:20:37.999605 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-17 06:20:37.999623 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-17 06:20:37.999641 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-17 06:20:37.999659 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-17 06:20:37.999707 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-17 06:20:37.999726 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-17 06:20:37.999745 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-17 06:20:37.999763 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-17 06:20:37.999782 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-17 06:20:37.999800 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-17 06:20:37.999819 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-17 06:20:37.999837 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-17 06:20:37.999854 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-17 06:20:37.999872 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-17 06:20:37.999890 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-04-17 06:20:37.999910 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-17 06:20:37.999929 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-04-17 06:20:37.999946 | orchestrator |
2026-04-17 06:20:37.999964 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-17 06:20:37.999981 | orchestrator | Friday 17 April 2026 06:20:25 +0000 (0:00:05.685) 0:25:28.260 **********
2026-04-17 06:20:37.999999 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5, testbed-node-3
2026-04-17 06:20:38.000018 | orchestrator |
2026-04-17 06:20:38.000037 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-17 06:20:38.000056 | orchestrator | Friday 17 April 2026 06:20:26 +0000 (0:00:00.830) 0:25:29.091 **********
2026-04-17 06:20:38.000078 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 06:20:38.000116 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:20:38.000135 | orchestrator |
2026-04-17 06:20:38.000155 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-17 06:20:38.000175 | orchestrator | Friday 17 April 2026 06:20:26 +0000 (0:00:00.625) 0:25:29.717 **********
2026-04-17 06:20:38.000194 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 06:20:38.000215 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:20:38.000235 | orchestrator |
2026-04-17 06:20:38.000248 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-17 06:20:38.000282 | orchestrator | Friday 17 April 2026 06:20:28 +0000 (0:00:01.195) 0:25:30.912 **********
2026-04-17 06:20:38.000294 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:38.000305 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:38.000316 | orchestrator |
2026-04-17 06:20:38.000326 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-17 06:20:38.000337 | orchestrator | Friday 17 April 2026 06:20:28 +0000 (0:00:00.276) 0:25:31.189 **********
2026-04-17 06:20:38.000348 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:38.000358 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:38.000369 | orchestrator |
2026-04-17 06:20:38.000379 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-17 06:20:38.000390 | orchestrator | Friday 17 April 2026 06:20:28 +0000 (0:00:00.251) 0:25:31.441 **********
2026-04-17 06:20:38.000407 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:38.000425 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:38.000443 | orchestrator |
2026-04-17 06:20:38.000461 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-17 06:20:38.000478 | orchestrator | Friday 17 April 2026 06:20:28 +0000 (0:00:00.262) 0:25:31.704 **********
2026-04-17 06:20:38.000497 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:38.000516 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:38.000535 | orchestrator |
2026-04-17 06:20:38.000554 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-17 06:20:38.000573 | orchestrator | Friday 17 April 2026 06:20:29 +0000 (0:00:00.245) 0:25:31.950 **********
2026-04-17 06:20:38.000592 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:38.000610 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:38.000628 | orchestrator |
2026-04-17 06:20:38.000646 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-17 06:20:38.000692 | orchestrator | Friday 17 April 2026 06:20:29 +0000 (0:00:00.695) 0:25:32.645 **********
2026-04-17 06:20:38.000711 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:38.000729 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:38.000748 | orchestrator |
2026-04-17 06:20:38.000766 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-17 06:20:38.000785 | orchestrator | Friday 17 April 2026 06:20:30 +0000 (0:00:00.253) 0:25:32.899 **********
2026-04-17 06:20:38.000817 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:38.000837 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:38.000856 | orchestrator |
2026-04-17 06:20:38.000873 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-17 06:20:38.000891 | orchestrator | Friday 17 April 2026 06:20:30 +0000 (0:00:00.244) 0:25:33.143 **********
2026-04-17 06:20:38.000910 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:38.000929 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:38.000949 | orchestrator |
2026-04-17 06:20:38.000967 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-17 06:20:38.001001 | orchestrator | Friday 17 April 2026 06:20:30 +0000 (0:00:00.258) 0:25:33.401 **********
2026-04-17 06:20:38.001020 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:38.001040 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:38.001058 | orchestrator |
2026-04-17 06:20:38.001077 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-17 06:20:38.001095 | orchestrator | Friday 17 April 2026 06:20:30 +0000 (0:00:00.253) 0:25:33.655 **********
2026-04-17 06:20:38.001115 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:38.001134 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:38.001153 | orchestrator |
2026-04-17 06:20:38.001171 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-17 06:20:38.001189 | orchestrator | Friday 17 April 2026 06:20:31 +0000 (0:00:00.244) 0:25:33.899 **********
2026-04-17 06:20:38.001208 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:20:38.001227 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:20:38.001245 | orchestrator |
2026-04-17 06:20:38.001264 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-17 06:20:38.001282 | orchestrator | Friday 17 April 2026 06:20:31 +0000 (0:00:00.276) 0:25:34.176 **********
2026-04-17 06:20:38.001301 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-04-17 06:20:38.001320 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-17 06:20:38.001339 | orchestrator |
2026-04-17 06:20:38.001357 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-17 06:20:38.001376 | orchestrator | Friday 17 April 2026 06:20:35 +0000 (0:00:04.178) 0:25:38.354 **********
2026-04-17 06:20:38.001394 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 06:20:38.001413 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:20:38.001431 | orchestrator |
2026-04-17 06:20:38.001450 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-17 06:20:38.001469 | orchestrator | Friday 17 April 2026 06:20:35 +0000 (0:00:00.307) 0:25:38.662 **********
2026-04-17 06:20:38.001491 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-17 06:20:38.001530 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-17 06:21:03.421436 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-17 06:21:03.421550 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-17 06:21:03.421567 | orchestrator |
2026-04-17 06:21:03.421581 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-17 06:21:03.421594 | orchestrator | Friday 17 April 2026 06:20:39 +0000 (0:00:04.061) 0:25:42.724 **********
2026-04-17 06:21:03.421628 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:21:03.421642 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:03.421652 | orchestrator |
2026-04-17 06:21:03.421664 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-17 06:21:03.421675 | orchestrator | Friday 17 April 2026 06:20:40 +0000
(0:00:00.310) 0:25:43.034 ********** 2026-04-17 06:21:03.421685 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:21:03.421733 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:03.421744 | orchestrator | 2026-04-17 06:21:03.421756 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 06:21:03.421768 | orchestrator | Friday 17 April 2026 06:20:40 +0000 (0:00:00.257) 0:25:43.292 ********** 2026-04-17 06:21:03.421779 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:21:03.421805 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:03.421817 | orchestrator | 2026-04-17 06:21:03.421828 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-17 06:21:03.421839 | orchestrator | Friday 17 April 2026 06:20:40 +0000 (0:00:00.243) 0:25:43.536 ********** 2026-04-17 06:21:03.421850 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:21:03.421861 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:03.421872 | orchestrator | 2026-04-17 06:21:03.421883 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 06:21:03.421894 | orchestrator | Friday 17 April 2026 06:20:41 +0000 (0:00:00.723) 0:25:44.259 ********** 2026-04-17 06:21:03.421905 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:21:03.421916 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:03.421927 | orchestrator | 2026-04-17 06:21:03.421937 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 06:21:03.421948 | orchestrator | Friday 17 April 2026 06:20:41 +0000 (0:00:00.290) 0:25:44.549 ********** 2026-04-17 06:21:03.421959 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:21:03.421972 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:03.421984 | orchestrator | 2026-04-17 
06:21:03.421997 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 06:21:03.422010 | orchestrator | Friday 17 April 2026 06:20:42 +0000 (0:00:00.410) 0:25:44.959 ********** 2026-04-17 06:21:03.422082 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:21:03.422095 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:21:03.422108 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-17 06:21:03.422121 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:21:03.422134 | orchestrator | 2026-04-17 06:21:03.422153 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:21:03.422172 | orchestrator | Friday 17 April 2026 06:20:42 +0000 (0:00:00.488) 0:25:45.448 ********** 2026-04-17 06:21:03.422189 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:21:03.422209 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:21:03.422229 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-17 06:21:03.422248 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:21:03.422266 | orchestrator | 2026-04-17 06:21:03.422279 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:21:03.422292 | orchestrator | Friday 17 April 2026 06:20:43 +0000 (0:00:00.486) 0:25:45.935 ********** 2026-04-17 06:21:03.422305 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:21:03.422318 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:21:03.422329 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-17 06:21:03.422340 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:21:03.422351 | orchestrator | 2026-04-17 06:21:03.422362 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-04-17 06:21:03.422372 | orchestrator | Friday 17 April 2026 06:20:43 +0000 (0:00:00.473) 0:25:46.408 ********** 2026-04-17 06:21:03.422393 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:21:03.422404 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:03.422415 | orchestrator | 2026-04-17 06:21:03.422425 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:21:03.422436 | orchestrator | Friday 17 April 2026 06:20:43 +0000 (0:00:00.282) 0:25:46.691 ********** 2026-04-17 06:21:03.422446 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-17 06:21:03.422457 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-17 06:21:03.422468 | orchestrator | 2026-04-17 06:21:03.422479 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-17 06:21:03.422489 | orchestrator | Friday 17 April 2026 06:20:45 +0000 (0:00:01.146) 0:25:47.837 ********** 2026-04-17 06:21:03.422500 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:21:03.422511 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:03.422521 | orchestrator | 2026-04-17 06:21:03.422552 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-17 06:21:03.422563 | orchestrator | Friday 17 April 2026 06:20:46 +0000 (0:00:01.015) 0:25:48.853 ********** 2026-04-17 06:21:03.422574 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:21:03.422585 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:03.422595 | orchestrator | 2026-04-17 06:21:03.422606 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-17 06:21:03.422616 | orchestrator | Friday 17 April 2026 06:20:46 +0000 (0:00:00.260) 0:25:49.114 ********** 2026-04-17 06:21:03.422627 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5, 
testbed-node-3 2026-04-17 06:21:03.422639 | orchestrator | 2026-04-17 06:21:03.422649 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-17 06:21:03.422660 | orchestrator | Friday 17 April 2026 06:20:46 +0000 (0:00:00.385) 0:25:49.499 ********** 2026-04-17 06:21:03.422671 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-17 06:21:03.422681 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-17 06:21:03.422714 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-17 06:21:03.422725 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-17 06:21:03.422736 | orchestrator | 2026-04-17 06:21:03.422746 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-17 06:21:03.422757 | orchestrator | Friday 17 April 2026 06:20:47 +0000 (0:00:00.936) 0:25:50.436 ********** 2026-04-17 06:21:03.422768 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 06:21:03.422778 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-17 06:21:03.422789 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 06:21:03.422800 | orchestrator | 2026-04-17 06:21:03.422810 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-17 06:21:03.422821 | orchestrator | Friday 17 April 2026 06:20:50 +0000 (0:00:02.660) 0:25:53.096 ********** 2026-04-17 06:21:03.422838 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-17 06:21:03.422850 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-17 06:21:03.422861 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:21:03.422871 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-17 06:21:03.422882 | orchestrator | skipping: [testbed-node-3] => 
(item=None)  2026-04-17 06:21:03.422893 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:03.422904 | orchestrator | 2026-04-17 06:21:03.422914 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-17 06:21:03.422925 | orchestrator | Friday 17 April 2026 06:20:51 +0000 (0:00:01.553) 0:25:54.650 ********** 2026-04-17 06:21:03.422936 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:21:03.422947 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:03.422958 | orchestrator | 2026-04-17 06:21:03.422969 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-17 06:21:03.422986 | orchestrator | Friday 17 April 2026 06:20:52 +0000 (0:00:00.643) 0:25:55.294 ********** 2026-04-17 06:21:03.422997 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:21:03.423008 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:03.423018 | orchestrator | 2026-04-17 06:21:03.423029 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-17 06:21:03.423040 | orchestrator | Friday 17 April 2026 06:20:52 +0000 (0:00:00.245) 0:25:55.539 ********** 2026-04-17 06:21:03.423051 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5, testbed-node-3 2026-04-17 06:21:03.423062 | orchestrator | 2026-04-17 06:21:03.423072 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-17 06:21:03.423083 | orchestrator | Friday 17 April 2026 06:20:53 +0000 (0:00:00.395) 0:25:55.934 ********** 2026-04-17 06:21:03.423094 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5, testbed-node-3 2026-04-17 06:21:03.423104 | orchestrator | 2026-04-17 06:21:03.423115 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-17 06:21:03.423125 | orchestrator | Friday 17 April 2026 
06:20:53 +0000 (0:00:00.746) 0:25:56.681 ********** 2026-04-17 06:21:03.423136 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:21:03.423147 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:03.423158 | orchestrator | 2026-04-17 06:21:03.423169 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-17 06:21:03.423179 | orchestrator | Friday 17 April 2026 06:20:55 +0000 (0:00:01.175) 0:25:57.857 ********** 2026-04-17 06:21:03.423190 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:21:03.423201 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:03.423211 | orchestrator | 2026-04-17 06:21:03.423222 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-17 06:21:03.423233 | orchestrator | Friday 17 April 2026 06:20:56 +0000 (0:00:01.061) 0:25:58.918 ********** 2026-04-17 06:21:03.423244 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:21:03.423254 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:03.423265 | orchestrator | 2026-04-17 06:21:03.423275 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-17 06:21:03.423286 | orchestrator | Friday 17 April 2026 06:20:57 +0000 (0:00:01.351) 0:26:00.269 ********** 2026-04-17 06:21:03.423302 | orchestrator | changed: [testbed-node-5] 2026-04-17 06:21:03.423338 | orchestrator | changed: [testbed-node-3] 2026-04-17 06:21:03.423366 | orchestrator | 2026-04-17 06:21:03.423385 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-17 06:21:03.423402 | orchestrator | Friday 17 April 2026 06:20:59 +0000 (0:00:02.460) 0:26:02.730 ********** 2026-04-17 06:21:03.423419 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:21:03.423435 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:03.423450 | orchestrator | 2026-04-17 06:21:03.423469 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-04-17 06:21:03.423489 | orchestrator | Friday 17 April 2026 06:21:00 +0000 (0:00:00.860) 0:26:03.590 ********** 2026-04-17 06:21:03.423508 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:21:03.423536 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:21:11.507397 | orchestrator | 2026-04-17 06:21:11.507522 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-17 06:21:11.507540 | orchestrator | 2026-04-17 06:21:11.507553 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:21:11.507565 | orchestrator | Friday 17 April 2026 06:21:04 +0000 (0:00:03.167) 0:26:06.758 ********** 2026-04-17 06:21:11.507576 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-04-17 06:21:11.507587 | orchestrator | 2026-04-17 06:21:11.507597 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 06:21:11.507608 | orchestrator | Friday 17 April 2026 06:21:04 +0000 (0:00:00.263) 0:26:07.021 ********** 2026-04-17 06:21:11.507619 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:11.507654 | orchestrator | 2026-04-17 06:21:11.507665 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-17 06:21:11.507676 | orchestrator | Friday 17 April 2026 06:21:04 +0000 (0:00:00.445) 0:26:07.466 ********** 2026-04-17 06:21:11.507687 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:11.507750 | orchestrator | 2026-04-17 06:21:11.507763 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 06:21:11.507774 | orchestrator | Friday 17 April 2026 06:21:04 +0000 (0:00:00.161) 0:26:07.628 ********** 2026-04-17 06:21:11.507785 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:11.507795 | 
orchestrator | 2026-04-17 06:21:11.507806 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:21:11.507817 | orchestrator | Friday 17 April 2026 06:21:05 +0000 (0:00:00.446) 0:26:08.074 ********** 2026-04-17 06:21:11.507828 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:11.507838 | orchestrator | 2026-04-17 06:21:11.507849 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-17 06:21:11.507860 | orchestrator | Friday 17 April 2026 06:21:05 +0000 (0:00:00.175) 0:26:08.250 ********** 2026-04-17 06:21:11.507870 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:11.507881 | orchestrator | 2026-04-17 06:21:11.507892 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-17 06:21:11.507917 | orchestrator | Friday 17 April 2026 06:21:05 +0000 (0:00:00.145) 0:26:08.396 ********** 2026-04-17 06:21:11.507928 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:11.507939 | orchestrator | 2026-04-17 06:21:11.507949 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-17 06:21:11.507961 | orchestrator | Friday 17 April 2026 06:21:05 +0000 (0:00:00.173) 0:26:08.569 ********** 2026-04-17 06:21:11.507972 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:11.507983 | orchestrator | 2026-04-17 06:21:11.507994 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-17 06:21:11.508005 | orchestrator | Friday 17 April 2026 06:21:05 +0000 (0:00:00.162) 0:26:08.732 ********** 2026-04-17 06:21:11.508015 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:11.508026 | orchestrator | 2026-04-17 06:21:11.508037 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-17 06:21:11.508048 | orchestrator | Friday 17 April 2026 06:21:06 +0000 (0:00:00.135) 
0:26:08.867 ********** 2026-04-17 06:21:11.508059 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:21:11.508070 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:21:11.508080 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:21:11.508091 | orchestrator | 2026-04-17 06:21:11.508102 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-17 06:21:11.508112 | orchestrator | Friday 17 April 2026 06:21:07 +0000 (0:00:01.518) 0:26:10.386 ********** 2026-04-17 06:21:11.508123 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:11.508134 | orchestrator | 2026-04-17 06:21:11.508144 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-17 06:21:11.508155 | orchestrator | Friday 17 April 2026 06:21:07 +0000 (0:00:00.294) 0:26:10.680 ********** 2026-04-17 06:21:11.508165 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:21:11.508176 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:21:11.508187 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:21:11.508197 | orchestrator | 2026-04-17 06:21:11.508208 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-17 06:21:11.508219 | orchestrator | Friday 17 April 2026 06:21:09 +0000 (0:00:02.006) 0:26:12.686 ********** 2026-04-17 06:21:11.508230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-17 06:21:11.508241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-17 06:21:11.508260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-17 
06:21:11.508270 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:11.508281 | orchestrator | 2026-04-17 06:21:11.508292 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-17 06:21:11.508303 | orchestrator | Friday 17 April 2026 06:21:10 +0000 (0:00:00.479) 0:26:13.166 ********** 2026-04-17 06:21:11.508316 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-17 06:21:11.508330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-17 06:21:11.508361 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 06:21:11.508373 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:11.508384 | orchestrator | 2026-04-17 06:21:11.508395 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-17 06:21:11.508405 | orchestrator | Friday 17 April 2026 06:21:11 +0000 (0:00:00.667) 0:26:13.833 ********** 2026-04-17 06:21:11.508419 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 
06:21:11.508432 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:11.508449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:11.508460 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:11.508471 | orchestrator | 2026-04-17 06:21:11.508482 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-17 06:21:11.508492 | orchestrator | Friday 17 April 2026 06:21:11 +0000 (0:00:00.177) 0:26:14.010 ********** 2026-04-17 06:21:11.508505 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:21:08.546427', 'end': '2026-04-17 06:21:08.596244', 'delta': '0:00:00.049817', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-17 06:21:11.508519 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:21:09.118699', 'end': '2026-04-17 06:21:09.174915', 'delta': '0:00:00.056216', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-17 06:21:11.508538 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:21:09.730004', 'end': '2026-04-17 06:21:09.790426', 'delta': '0:00:00.060422', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-17 06:21:11.508549 | orchestrator | 2026-04-17 06:21:11.508567 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-17 06:21:15.914990 | orchestrator | Friday 17 April 2026 06:21:11 +0000 (0:00:00.238) 0:26:14.249 ********** 2026-04-17 06:21:15.915096 | orchestrator | ok: [testbed-node-3] 2026-04-17 
06:21:15.915112 | orchestrator | 2026-04-17 06:21:15.915125 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-17 06:21:15.915136 | orchestrator | Friday 17 April 2026 06:21:11 +0000 (0:00:00.282) 0:26:14.531 ********** 2026-04-17 06:21:15.915147 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:15.915159 | orchestrator | 2026-04-17 06:21:15.915170 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-17 06:21:15.915181 | orchestrator | Friday 17 April 2026 06:21:12 +0000 (0:00:00.277) 0:26:14.809 ********** 2026-04-17 06:21:15.915191 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:15.915202 | orchestrator | 2026-04-17 06:21:15.915213 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-17 06:21:15.915223 | orchestrator | Friday 17 April 2026 06:21:12 +0000 (0:00:00.140) 0:26:14.950 ********** 2026-04-17 06:21:15.915234 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:21:15.915245 | orchestrator | 2026-04-17 06:21:15.915256 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:21:15.915266 | orchestrator | Friday 17 April 2026 06:21:13 +0000 (0:00:00.993) 0:26:15.943 ********** 2026-04-17 06:21:15.915277 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:15.915287 | orchestrator | 2026-04-17 06:21:15.915298 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 06:21:15.915309 | orchestrator | Friday 17 April 2026 06:21:13 +0000 (0:00:00.147) 0:26:16.091 ********** 2026-04-17 06:21:15.915319 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:15.915330 | orchestrator | 2026-04-17 06:21:15.915340 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-17 06:21:15.915351 | orchestrator 
| Friday 17 April 2026 06:21:13 +0000 (0:00:00.129) 0:26:16.221 ********** 2026-04-17 06:21:15.915361 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:15.915372 | orchestrator | 2026-04-17 06:21:15.915399 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:21:15.915410 | orchestrator | Friday 17 April 2026 06:21:14 +0000 (0:00:01.094) 0:26:17.316 ********** 2026-04-17 06:21:15.915420 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:15.915454 | orchestrator | 2026-04-17 06:21:15.915466 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 06:21:15.915476 | orchestrator | Friday 17 April 2026 06:21:14 +0000 (0:00:00.145) 0:26:17.462 ********** 2026-04-17 06:21:15.915487 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:15.915498 | orchestrator | 2026-04-17 06:21:15.915508 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-17 06:21:15.915519 | orchestrator | Friday 17 April 2026 06:21:14 +0000 (0:00:00.130) 0:26:17.593 ********** 2026-04-17 06:21:15.915530 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:15.915543 | orchestrator | 2026-04-17 06:21:15.915556 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-17 06:21:15.915568 | orchestrator | Friday 17 April 2026 06:21:15 +0000 (0:00:00.183) 0:26:17.776 ********** 2026-04-17 06:21:15.915580 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:15.915592 | orchestrator | 2026-04-17 06:21:15.915605 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 06:21:15.915617 | orchestrator | Friday 17 April 2026 06:21:15 +0000 (0:00:00.138) 0:26:17.915 ********** 2026-04-17 06:21:15.915628 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:15.915641 | orchestrator | 2026-04-17 06:21:15.915652 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 06:21:15.915665 | orchestrator | Friday 17 April 2026 06:21:15 +0000 (0:00:00.190) 0:26:18.105 ********** 2026-04-17 06:21:15.915676 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:15.915688 | orchestrator | 2026-04-17 06:21:15.915725 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 06:21:15.915738 | orchestrator | Friday 17 April 2026 06:21:15 +0000 (0:00:00.155) 0:26:18.261 ********** 2026-04-17 06:21:15.915751 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:15.915763 | orchestrator | 2026-04-17 06:21:15.915774 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 06:21:15.915787 | orchestrator | Friday 17 April 2026 06:21:15 +0000 (0:00:00.194) 0:26:18.455 ********** 2026-04-17 06:21:15.915801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:21:15.915819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'uuids': ['7b3e98f1-7f68-4c04-9bb1-a0fd9b3252da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p']}})  2026-04-17 06:21:15.915853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c054ea69', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 06:21:15.915875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2']}})  2026-04-17 06:21:15.915897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:21:15.915910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:21:15.915922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 06:21:15.915934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:21:15.915945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px', 'dm-uuid-CRYPT-LUKS2-0eb8d7ab97d34aa3a4f06ee9564e4391-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:21:15.915964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:21:16.237801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'uuids': ['0eb8d7ab-97d3-4aa3-a4f0-6ee9564e4391'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px']}})  2026-04-17 06:21:16.237945 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08']}})  2026-04-17 06:21:16.237963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:21:16.237982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fc59f804', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-17 06:21:16.238077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:21:16.238103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:21:16.238121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p', 'dm-uuid-CRYPT-LUKS2-7b3e98f17f684c049bb1a0fd9b3252da-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:21:16.238134 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:16.238147 | orchestrator | 2026-04-17 06:21:16.238159 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 06:21:16.238171 | orchestrator | Friday 17 April 2026 06:21:16 +0000 (0:00:00.388) 0:26:18.843 ********** 2026-04-17 06:21:16.238183 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.238196 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08', 'dm-uuid-LVM-8KUqJZnaSXCdwbEyOdNIcS8KXTeaG1sfrn6m4Y9stAdpS94vZKB2EBG86l0U0N4p'], 'uuids': ['7b3e98f1-7f68-4c04-9bb1-a0fd9b3252da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.238207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b', 'scsi-SQEMU_QEMU_HARDDISK_c054ea69-870b-4e6c-a28f-b4f3aaa6484b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c054ea69', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.238227 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Utq4Xt-Rjwf-dPK7-fH2h-hZQO-NBTn-XnR4Jw', 'scsi-0QEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c', 'scsi-SQEMU_QEMU_HARDDISK_243e8c65-8f34-4fed-aca0-50c577764c9c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.368588 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.368678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.368693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.368772 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.368786 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px', 'dm-uuid-CRYPT-LUKS2-0eb8d7ab97d34aa3a4f06ee9564e4391-3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.368816 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.368851 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ba7178ba--163b--58b0--89b4--3a73c9468ec2-osd--block--ba7178ba--163b--58b0--89b4--3a73c9468ec2', 'dm-uuid-LVM-RQm1Ybyz1MnRkIZMCdyk2jWpzCjob99V3FKefFlp3pUBqVNqyGMG0pf0VgJ2z9Px'], 'uuids': ['0eb8d7ab-97d3-4aa3-a4f0-6ee9564e4391'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '243e8c65', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3FKefF-lp3p-UBqV-NqyG-MG0p-f0Vg-J2z9Px']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.368864 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-N3OqWn-FfLl-oUlV-iDHB-xCLH-taE9-pGSVp8', 'scsi-0QEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098', 'scsi-SQEMU_QEMU_HARDDISK_348c4a49-80d1-4817-b52d-126919837098'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '348c4a49', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--34b96a2b--74e9--5d3b--a409--9327cdd3ba08-osd--block--34b96a2b--74e9--5d3b--a409--9327cdd3ba08']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.368878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:16.368904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fc59f804', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc59f804-1091-4440-a733-689672c4390d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:27.123041 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:27.123175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:21:27.123195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p', 'dm-uuid-CRYPT-LUKS2-7b3e98f17f684c049bb1a0fd9b3252da-rn6m4Y-9stA-dpS9-4vZK-B2EB-G86l-0U0N4p'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:21:27.123212 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:27.123228 | orchestrator |
2026-04-17 06:21:27.123243 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-17 06:21:27.123285 | orchestrator | Friday 17 April 2026 06:21:16 +0000 (0:00:00.442) 0:26:19.286 **********
2026-04-17 06:21:27.123300 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:21:27.123314 | orchestrator |
2026-04-17 06:21:27.123328 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-17 06:21:27.123341 | orchestrator | Friday 17 April 2026 06:21:17 +0000 (0:00:00.512) 0:26:19.799 **********
2026-04-17 06:21:27.123354 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:21:27.123367 | orchestrator |
2026-04-17 06:21:27.123380 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:21:27.123394 | orchestrator | Friday 17 April 2026 06:21:17 +0000 (0:00:00.147) 0:26:19.946 **********
2026-04-17 06:21:27.123407 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:21:27.123420 | orchestrator |
2026-04-17 06:21:27.123433 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:21:27.123447 | orchestrator | Friday 17 April 2026 06:21:17 +0000 (0:00:00.535) 0:26:20.482 **********
2026-04-17 06:21:27.123460 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:27.123473 | orchestrator |
2026-04-17 06:21:27.123486 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:21:27.123499 | orchestrator | Friday 17 April 2026 06:21:18 +0000 (0:00:00.557) 0:26:21.039 **********
2026-04-17 06:21:27.123512 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:27.123525 | orchestrator |
2026-04-17 06:21:27.123539 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:21:27.123553 | orchestrator | Friday 17 April 2026 06:21:18 +0000 (0:00:00.302) 0:26:21.342 **********
2026-04-17 06:21:27.123567 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:27.123581 | orchestrator |
2026-04-17 06:21:27.123595 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 06:21:27.123609 | orchestrator | Friday 17 April 2026 06:21:18 +0000 (0:00:00.163) 0:26:21.506 **********
2026-04-17 06:21:27.123623 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-17 06:21:27.123638 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-17 06:21:27.123652 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-17 06:21:27.123665 | orchestrator |
2026-04-17 06:21:27.123679 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 06:21:27.123693 | orchestrator | Friday 17 April 2026 06:21:19 +0000 (0:00:00.699) 0:26:22.206 **********
2026-04-17 06:21:27.123708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-17 06:21:27.123746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-17 06:21:27.123776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-17 06:21:27.123790 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:27.123803 | orchestrator |
2026-04-17 06:21:27.123818 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-17 06:21:27.123832 | orchestrator | Friday 17 April 2026 06:21:19 +0000 (0:00:00.195) 0:26:22.401 **********
2026-04-17 06:21:27.123865 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-04-17 06:21:27.123881 | orchestrator |
2026-04-17 06:21:27.123896 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:21:27.123911 | orchestrator | Friday 17 April 2026 06:21:19 +0000 (0:00:00.250) 0:26:22.652 **********
2026-04-17 06:21:27.123924 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:27.123938 | orchestrator |
2026-04-17 06:21:27.123950 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:21:27.123964 | orchestrator | Friday 17 April 2026 06:21:20 +0000 (0:00:00.183) 0:26:22.835 **********
2026-04-17 06:21:27.123977 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:27.123989 | orchestrator |
2026-04-17 06:21:27.124003 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:21:27.124016 | orchestrator | Friday 17 April 2026 06:21:20 +0000 (0:00:00.176) 0:26:23.011 **********
2026-04-17 06:21:27.124039 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:27.124053 | orchestrator |
2026-04-17 06:21:27.124067 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:21:27.124080 | orchestrator | Friday 17 April 2026 06:21:20 +0000 (0:00:00.166) 0:26:23.178 **********
2026-04-17 06:21:27.124093 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:21:27.124106 | orchestrator |
2026-04-17 06:21:27.124120 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:21:27.124133 | orchestrator | Friday 17 April 2026 06:21:20 +0000 (0:00:00.287) 0:26:23.465 **********
2026-04-17 06:21:27.124147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 06:21:27.124160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 06:21:27.124174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 06:21:27.124187 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:27.124201 | orchestrator |
2026-04-17 06:21:27.124214 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 06:21:27.124227 | orchestrator | Friday 17 April 2026 06:21:21 +0000 (0:00:00.883) 0:26:24.348 **********
2026-04-17 06:21:27.124240 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 06:21:27.124253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 06:21:27.124266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 06:21:27.124280 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:27.124293 | orchestrator |
2026-04-17 06:21:27.124306 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 06:21:27.124319 | orchestrator | Friday 17 April 2026 06:21:22 +0000 (0:00:00.893) 0:26:25.242 **********
2026-04-17 06:21:27.124332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 06:21:27.124346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 06:21:27.124359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 06:21:27.124372 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:21:27.124385 | orchestrator |
2026-04-17 06:21:27.124398 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 06:21:27.124411 | orchestrator | Friday 17 April 2026 06:21:23 +0000 (0:00:01.373) 0:26:26.616 **********
2026-04-17 06:21:27.124425 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:21:27.124438 | orchestrator |
2026-04-17 06:21:27.124452 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 06:21:27.124465 | orchestrator | Friday 17 April 2026 06:21:24 +0000 (0:00:00.189) 0:26:26.806 **********
2026-04-17 06:21:27.124478 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-17 06:21:27.124492 | orchestrator |
2026-04-17 06:21:27.124505 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-17 06:21:27.124518 | orchestrator | Friday 17 April 2026 06:21:24 +0000 (0:00:00.370) 0:26:27.176 **********
2026-04-17 06:21:27.124531 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:21:27.124544 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:21:27.124558 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:21:27.124571 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 06:21:27.124585 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 06:21:27.124597 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:21:27.124610 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:21:27.124624 | orchestrator |
2026-04-17 06:21:27.124637 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-17 06:21:27.124659 | orchestrator | Friday 17 April 2026 06:21:25 +0000 (0:00:00.835) 0:26:28.012 **********
2026-04-17 06:21:27.124672 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:21:27.124685 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:21:27.124699 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:21:27.124733 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 06:21:27.124754 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 06:21:27.124768 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-17 06:21:27.124782 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:21:27.124795 | orchestrator |
2026-04-17 06:21:27.124816 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-04-17 06:21:42.846805 | orchestrator | Friday 17 April 2026 06:21:27 +0000 (0:00:01.846) 0:26:29.858 **********
2026-04-17 06:21:42.846925 | orchestrator | changed: [testbed-node-3]
2026-04-17 06:21:42.846941 | orchestrator |
2026-04-17 06:21:42.846953 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-04-17 06:21:42.846965 | orchestrator | Friday 17 April 2026 06:21:28 +0000 (0:00:01.367) 0:26:31.226 **********
2026-04-17 06:21:42.846977 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:21:42.846990 | orchestrator |
2026-04-17 06:21:42.847001 | orchestrator | TASK [Stop ceph rgw (pt.
2)] *************************************************** 2026-04-17 06:21:42.847012 | orchestrator | Friday 17 April 2026 06:21:30 +0000 (0:00:01.921) 0:26:33.147 ********** 2026-04-17 06:21:42.847023 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-17 06:21:42.847034 | orchestrator | 2026-04-17 06:21:42.847044 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 06:21:42.847055 | orchestrator | Friday 17 April 2026 06:21:31 +0000 (0:00:01.287) 0:26:34.435 ********** 2026-04-17 06:21:42.847066 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-04-17 06:21:42.847077 | orchestrator | 2026-04-17 06:21:42.847091 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 06:21:42.847109 | orchestrator | Friday 17 April 2026 06:21:31 +0000 (0:00:00.200) 0:26:34.635 ********** 2026-04-17 06:21:42.847129 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-04-17 06:21:42.847151 | orchestrator | 2026-04-17 06:21:42.847169 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 06:21:42.847188 | orchestrator | Friday 17 April 2026 06:21:32 +0000 (0:00:00.219) 0:26:34.855 ********** 2026-04-17 06:21:42.847208 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.847227 | orchestrator | 2026-04-17 06:21:42.847248 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 06:21:42.847267 | orchestrator | Friday 17 April 2026 06:21:32 +0000 (0:00:00.603) 0:26:35.458 ********** 2026-04-17 06:21:42.847285 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.847304 | orchestrator | 2026-04-17 06:21:42.847317 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-17 06:21:42.847330 | orchestrator | Friday 17 April 2026 06:21:33 +0000 (0:00:00.524) 0:26:35.983 ********** 2026-04-17 06:21:42.847341 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.847353 | orchestrator | 2026-04-17 06:21:42.847366 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 06:21:42.847377 | orchestrator | Friday 17 April 2026 06:21:33 +0000 (0:00:00.599) 0:26:36.583 ********** 2026-04-17 06:21:42.847389 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.847401 | orchestrator | 2026-04-17 06:21:42.847439 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-17 06:21:42.847452 | orchestrator | Friday 17 April 2026 06:21:34 +0000 (0:00:00.534) 0:26:37.117 ********** 2026-04-17 06:21:42.847465 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.847477 | orchestrator | 2026-04-17 06:21:42.847490 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 06:21:42.847502 | orchestrator | Friday 17 April 2026 06:21:34 +0000 (0:00:00.153) 0:26:37.270 ********** 2026-04-17 06:21:42.847515 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.847527 | orchestrator | 2026-04-17 06:21:42.847539 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 06:21:42.847551 | orchestrator | Friday 17 April 2026 06:21:34 +0000 (0:00:00.161) 0:26:37.432 ********** 2026-04-17 06:21:42.847564 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.847575 | orchestrator | 2026-04-17 06:21:42.847587 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 06:21:42.847600 | orchestrator | Friday 17 April 2026 06:21:34 +0000 (0:00:00.173) 0:26:37.606 ********** 2026-04-17 06:21:42.847612 | 
orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.847624 | orchestrator | 2026-04-17 06:21:42.847638 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 06:21:42.847650 | orchestrator | Friday 17 April 2026 06:21:35 +0000 (0:00:00.541) 0:26:38.148 ********** 2026-04-17 06:21:42.847662 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.847672 | orchestrator | 2026-04-17 06:21:42.847683 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 06:21:42.847693 | orchestrator | Friday 17 April 2026 06:21:35 +0000 (0:00:00.595) 0:26:38.744 ********** 2026-04-17 06:21:42.847704 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.847715 | orchestrator | 2026-04-17 06:21:42.847725 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 06:21:42.847770 | orchestrator | Friday 17 April 2026 06:21:36 +0000 (0:00:00.138) 0:26:38.882 ********** 2026-04-17 06:21:42.847781 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.847792 | orchestrator | 2026-04-17 06:21:42.847802 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 06:21:42.847813 | orchestrator | Friday 17 April 2026 06:21:36 +0000 (0:00:00.172) 0:26:39.054 ********** 2026-04-17 06:21:42.847824 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.847834 | orchestrator | 2026-04-17 06:21:42.847845 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 06:21:42.847855 | orchestrator | Friday 17 April 2026 06:21:36 +0000 (0:00:00.165) 0:26:39.220 ********** 2026-04-17 06:21:42.847866 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.847877 | orchestrator | 2026-04-17 06:21:42.847902 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 06:21:42.847913 
| orchestrator | Friday 17 April 2026 06:21:36 +0000 (0:00:00.168) 0:26:39.388 ********** 2026-04-17 06:21:42.847924 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.847935 | orchestrator | 2026-04-17 06:21:42.847963 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 06:21:42.847975 | orchestrator | Friday 17 April 2026 06:21:36 +0000 (0:00:00.159) 0:26:39.548 ********** 2026-04-17 06:21:42.847986 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.847996 | orchestrator | 2026-04-17 06:21:42.848007 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 06:21:42.848017 | orchestrator | Friday 17 April 2026 06:21:37 +0000 (0:00:00.537) 0:26:40.085 ********** 2026-04-17 06:21:42.848028 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848038 | orchestrator | 2026-04-17 06:21:42.848049 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 06:21:42.848059 | orchestrator | Friday 17 April 2026 06:21:37 +0000 (0:00:00.158) 0:26:40.244 ********** 2026-04-17 06:21:42.848070 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848081 | orchestrator | 2026-04-17 06:21:42.848103 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 06:21:42.848114 | orchestrator | Friday 17 April 2026 06:21:37 +0000 (0:00:00.154) 0:26:40.398 ********** 2026-04-17 06:21:42.848124 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.848135 | orchestrator | 2026-04-17 06:21:42.848146 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 06:21:42.848156 | orchestrator | Friday 17 April 2026 06:21:37 +0000 (0:00:00.191) 0:26:40.590 ********** 2026-04-17 06:21:42.848167 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.848177 | orchestrator | 2026-04-17 06:21:42.848188 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-17 06:21:42.848198 | orchestrator | Friday 17 April 2026 06:21:38 +0000 (0:00:00.269) 0:26:40.859 ********** 2026-04-17 06:21:42.848209 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848219 | orchestrator | 2026-04-17 06:21:42.848230 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-17 06:21:42.848240 | orchestrator | Friday 17 April 2026 06:21:38 +0000 (0:00:00.156) 0:26:41.016 ********** 2026-04-17 06:21:42.848251 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848261 | orchestrator | 2026-04-17 06:21:42.848272 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-17 06:21:42.848282 | orchestrator | Friday 17 April 2026 06:21:38 +0000 (0:00:00.132) 0:26:41.148 ********** 2026-04-17 06:21:42.848293 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848303 | orchestrator | 2026-04-17 06:21:42.848314 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-17 06:21:42.848324 | orchestrator | Friday 17 April 2026 06:21:38 +0000 (0:00:00.160) 0:26:41.308 ********** 2026-04-17 06:21:42.848335 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848345 | orchestrator | 2026-04-17 06:21:42.848370 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-17 06:21:42.848393 | orchestrator | Friday 17 April 2026 06:21:38 +0000 (0:00:00.160) 0:26:41.469 ********** 2026-04-17 06:21:42.848404 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848415 | orchestrator | 2026-04-17 06:21:42.848425 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-17 06:21:42.848436 | orchestrator | Friday 17 April 2026 06:21:38 +0000 (0:00:00.158) 0:26:41.628 ********** 
2026-04-17 06:21:42.848446 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848457 | orchestrator | 2026-04-17 06:21:42.848467 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-17 06:21:42.848478 | orchestrator | Friday 17 April 2026 06:21:39 +0000 (0:00:00.135) 0:26:41.763 ********** 2026-04-17 06:21:42.848489 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848499 | orchestrator | 2026-04-17 06:21:42.848510 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-17 06:21:42.848521 | orchestrator | Friday 17 April 2026 06:21:39 +0000 (0:00:00.127) 0:26:41.891 ********** 2026-04-17 06:21:42.848532 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848543 | orchestrator | 2026-04-17 06:21:42.848553 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-17 06:21:42.848564 | orchestrator | Friday 17 April 2026 06:21:39 +0000 (0:00:00.592) 0:26:42.484 ********** 2026-04-17 06:21:42.848574 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848585 | orchestrator | 2026-04-17 06:21:42.848595 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-17 06:21:42.848606 | orchestrator | Friday 17 April 2026 06:21:39 +0000 (0:00:00.144) 0:26:42.629 ********** 2026-04-17 06:21:42.848616 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848627 | orchestrator | 2026-04-17 06:21:42.848638 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-17 06:21:42.848648 | orchestrator | Friday 17 April 2026 06:21:40 +0000 (0:00:00.129) 0:26:42.758 ********** 2026-04-17 06:21:42.848659 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848677 | orchestrator | 2026-04-17 06:21:42.848687 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-17 06:21:42.848698 | orchestrator | Friday 17 April 2026 06:21:40 +0000 (0:00:00.131) 0:26:42.890 ********** 2026-04-17 06:21:42.848708 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:42.848719 | orchestrator | 2026-04-17 06:21:42.848750 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-17 06:21:42.848761 | orchestrator | Friday 17 April 2026 06:21:40 +0000 (0:00:00.273) 0:26:43.164 ********** 2026-04-17 06:21:42.848772 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.848783 | orchestrator | 2026-04-17 06:21:42.848793 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-17 06:21:42.848804 | orchestrator | Friday 17 April 2026 06:21:41 +0000 (0:00:00.958) 0:26:44.122 ********** 2026-04-17 06:21:42.848814 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:42.848824 | orchestrator | 2026-04-17 06:21:42.848835 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-17 06:21:42.848851 | orchestrator | Friday 17 April 2026 06:21:42 +0000 (0:00:01.227) 0:26:45.350 ********** 2026-04-17 06:21:42.848862 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-04-17 06:21:42.848873 | orchestrator | 2026-04-17 06:21:42.848884 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-17 06:21:42.848902 | orchestrator | Friday 17 April 2026 06:21:42 +0000 (0:00:00.233) 0:26:45.583 ********** 2026-04-17 06:21:59.345952 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.346131 | orchestrator | 2026-04-17 06:21:59.346151 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-17 06:21:59.346165 | orchestrator | Friday 17 April 2026 06:21:43 +0000 (0:00:00.169) 0:26:45.752 ********** 
2026-04-17 06:21:59.346176 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.346187 | orchestrator | 2026-04-17 06:21:59.346198 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-17 06:21:59.346209 | orchestrator | Friday 17 April 2026 06:21:43 +0000 (0:00:00.157) 0:26:45.910 ********** 2026-04-17 06:21:59.346220 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 06:21:59.346231 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 06:21:59.346243 | orchestrator | 2026-04-17 06:21:59.346253 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-17 06:21:59.346264 | orchestrator | Friday 17 April 2026 06:21:43 +0000 (0:00:00.830) 0:26:46.741 ********** 2026-04-17 06:21:59.346275 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:59.346287 | orchestrator | 2026-04-17 06:21:59.346299 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-17 06:21:59.346309 | orchestrator | Friday 17 April 2026 06:21:44 +0000 (0:00:00.890) 0:26:47.632 ********** 2026-04-17 06:21:59.346320 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.346331 | orchestrator | 2026-04-17 06:21:59.346342 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-17 06:21:59.346352 | orchestrator | Friday 17 April 2026 06:21:45 +0000 (0:00:00.174) 0:26:47.806 ********** 2026-04-17 06:21:59.346363 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.346374 | orchestrator | 2026-04-17 06:21:59.346385 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-17 06:21:59.346395 | orchestrator | Friday 17 April 2026 06:21:45 +0000 (0:00:00.181) 0:26:47.988 ********** 2026-04-17 06:21:59.346406 | orchestrator | 
skipping: [testbed-node-3] 2026-04-17 06:21:59.346417 | orchestrator | 2026-04-17 06:21:59.346428 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-17 06:21:59.346438 | orchestrator | Friday 17 April 2026 06:21:45 +0000 (0:00:00.132) 0:26:48.120 ********** 2026-04-17 06:21:59.346449 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-04-17 06:21:59.346461 | orchestrator | 2026-04-17 06:21:59.346497 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-17 06:21:59.346510 | orchestrator | Friday 17 April 2026 06:21:45 +0000 (0:00:00.253) 0:26:48.373 ********** 2026-04-17 06:21:59.346523 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:59.346536 | orchestrator | 2026-04-17 06:21:59.346548 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-17 06:21:59.346560 | orchestrator | Friday 17 April 2026 06:21:46 +0000 (0:00:00.732) 0:26:49.106 ********** 2026-04-17 06:21:59.346573 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-17 06:21:59.346586 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 06:21:59.346598 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 06:21:59.346610 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.346623 | orchestrator | 2026-04-17 06:21:59.346635 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-17 06:21:59.346647 | orchestrator | Friday 17 April 2026 06:21:46 +0000 (0:00:00.164) 0:26:49.270 ********** 2026-04-17 06:21:59.346659 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.346671 | orchestrator | 2026-04-17 06:21:59.346684 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-17 06:21:59.346697 | orchestrator | Friday 17 April 2026 06:21:46 +0000 (0:00:00.148) 0:26:49.419 ********** 2026-04-17 06:21:59.346709 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.346722 | orchestrator | 2026-04-17 06:21:59.346734 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-17 06:21:59.346786 | orchestrator | Friday 17 April 2026 06:21:46 +0000 (0:00:00.176) 0:26:49.595 ********** 2026-04-17 06:21:59.346800 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.346813 | orchestrator | 2026-04-17 06:21:59.346825 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-17 06:21:59.346836 | orchestrator | Friday 17 April 2026 06:21:47 +0000 (0:00:00.166) 0:26:49.761 ********** 2026-04-17 06:21:59.346846 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.346857 | orchestrator | 2026-04-17 06:21:59.346868 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-17 06:21:59.346878 | orchestrator | Friday 17 April 2026 06:21:47 +0000 (0:00:00.150) 0:26:49.911 ********** 2026-04-17 06:21:59.346889 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.346900 | orchestrator | 2026-04-17 06:21:59.346910 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-17 06:21:59.346921 | orchestrator | Friday 17 April 2026 06:21:47 +0000 (0:00:00.189) 0:26:50.102 ********** 2026-04-17 06:21:59.346932 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:59.346942 | orchestrator | 2026-04-17 06:21:59.346953 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-17 06:21:59.346963 | orchestrator | Friday 17 April 2026 06:21:48 +0000 (0:00:01.566) 0:26:51.668 ********** 2026-04-17 06:21:59.346974 | orchestrator | ok: 
[testbed-node-3] 2026-04-17 06:21:59.346985 | orchestrator | 2026-04-17 06:21:59.347010 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-17 06:21:59.347021 | orchestrator | Friday 17 April 2026 06:21:49 +0000 (0:00:00.613) 0:26:52.282 ********** 2026-04-17 06:21:59.347031 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-04-17 06:21:59.347042 | orchestrator | 2026-04-17 06:21:59.347053 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-17 06:21:59.347082 | orchestrator | Friday 17 April 2026 06:21:49 +0000 (0:00:00.252) 0:26:52.535 ********** 2026-04-17 06:21:59.347094 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.347104 | orchestrator | 2026-04-17 06:21:59.347115 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-17 06:21:59.347126 | orchestrator | Friday 17 April 2026 06:21:49 +0000 (0:00:00.181) 0:26:52.716 ********** 2026-04-17 06:21:59.347145 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.347156 | orchestrator | 2026-04-17 06:21:59.347167 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-17 06:21:59.347178 | orchestrator | Friday 17 April 2026 06:21:50 +0000 (0:00:00.164) 0:26:52.881 ********** 2026-04-17 06:21:59.347188 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.347199 | orchestrator | 2026-04-17 06:21:59.347210 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-17 06:21:59.347220 | orchestrator | Friday 17 April 2026 06:21:50 +0000 (0:00:00.165) 0:26:53.046 ********** 2026-04-17 06:21:59.347231 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.347241 | orchestrator | 2026-04-17 06:21:59.347252 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-17 06:21:59.347263 | orchestrator | Friday 17 April 2026 06:21:50 +0000 (0:00:00.134) 0:26:53.181 ********** 2026-04-17 06:21:59.347273 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.347284 | orchestrator | 2026-04-17 06:21:59.347294 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-17 06:21:59.347305 | orchestrator | Friday 17 April 2026 06:21:50 +0000 (0:00:00.185) 0:26:53.366 ********** 2026-04-17 06:21:59.347315 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.347326 | orchestrator | 2026-04-17 06:21:59.347337 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-17 06:21:59.347347 | orchestrator | Friday 17 April 2026 06:21:50 +0000 (0:00:00.154) 0:26:53.521 ********** 2026-04-17 06:21:59.347358 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.347369 | orchestrator | 2026-04-17 06:21:59.347379 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-17 06:21:59.347390 | orchestrator | Friday 17 April 2026 06:21:50 +0000 (0:00:00.175) 0:26:53.696 ********** 2026-04-17 06:21:59.347400 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:21:59.347411 | orchestrator | 2026-04-17 06:21:59.347421 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-17 06:21:59.347432 | orchestrator | Friday 17 April 2026 06:21:51 +0000 (0:00:00.174) 0:26:53.871 ********** 2026-04-17 06:21:59.347443 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:21:59.347453 | orchestrator | 2026-04-17 06:21:59.347464 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-17 06:21:59.347474 | orchestrator | Friday 17 April 2026 06:21:51 +0000 (0:00:00.209) 0:26:54.080 ********** 2026-04-17 06:21:59.347485 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-04-17 06:21:59.347496 | orchestrator | 2026-04-17 06:21:59.347506 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-17 06:21:59.347517 | orchestrator | Friday 17 April 2026 06:21:51 +0000 (0:00:00.579) 0:26:54.660 ********** 2026-04-17 06:21:59.347527 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-04-17 06:21:59.347539 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-17 06:21:59.347549 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-17 06:21:59.347560 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-17 06:21:59.347570 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-17 06:21:59.347581 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-17 06:21:59.347591 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-17 06:21:59.347602 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-17 06:21:59.347613 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-17 06:21:59.347624 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-17 06:21:59.347635 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-17 06:21:59.347645 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-17 06:21:59.347656 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-17 06:21:59.347673 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-17 06:21:59.347683 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-04-17 06:21:59.347694 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-04-17 06:21:59.347705 | orchestrator | 2026-04-17 06:21:59.347716 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-17 06:21:59.347727 | orchestrator | Friday 17 April 2026 06:21:57 +0000 (0:00:05.698) 0:27:00.359 **********
2026-04-17 06:21:59.347737 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-04-17 06:21:59.347769 | orchestrator |
2026-04-17 06:21:59.347781 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-17 06:21:59.347791 | orchestrator | Friday 17 April 2026 06:21:57 +0000 (0:00:00.209) 0:27:00.569 **********
2026-04-17 06:21:59.347802 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:21:59.347813 | orchestrator |
2026-04-17 06:21:59.347824 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-17 06:21:59.347840 | orchestrator | Friday 17 April 2026 06:21:58 +0000 (0:00:00.538) 0:27:01.108 **********
2026-04-17 06:21:59.347850 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:21:59.347861 | orchestrator |
2026-04-17 06:21:59.347871 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-17 06:21:59.347889 | orchestrator | Friday 17 April 2026 06:21:59 +0000 (0:00:00.971) 0:27:02.080 **********
2026-04-17 06:22:18.708081 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708201 | orchestrator |
2026-04-17 06:22:18.708218 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-17 06:22:18.708231 | orchestrator | Friday 17 April 2026 06:21:59 +0000 (0:00:00.141) 0:27:02.221 **********
2026-04-17 06:22:18.708242 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708253 | orchestrator |
2026-04-17 06:22:18.708265 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-17 06:22:18.708276 | orchestrator | Friday 17 April 2026 06:21:59 +0000 (0:00:00.141) 0:27:02.362 **********
2026-04-17 06:22:18.708286 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708297 | orchestrator |
2026-04-17 06:22:18.708308 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-17 06:22:18.708319 | orchestrator | Friday 17 April 2026 06:21:59 +0000 (0:00:00.161) 0:27:02.524 **********
2026-04-17 06:22:18.708329 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708340 | orchestrator |
2026-04-17 06:22:18.708351 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-17 06:22:18.708361 | orchestrator | Friday 17 April 2026 06:21:59 +0000 (0:00:00.133) 0:27:02.658 **********
2026-04-17 06:22:18.708372 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708382 | orchestrator |
2026-04-17 06:22:18.708393 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-17 06:22:18.708405 | orchestrator | Friday 17 April 2026 06:22:00 +0000 (0:00:00.179) 0:27:02.803 **********
2026-04-17 06:22:18.708415 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708426 | orchestrator |
2026-04-17 06:22:18.708436 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-17 06:22:18.708447 | orchestrator | Friday 17 April 2026 06:22:00 +0000 (0:00:00.179) 0:27:02.983 **********
2026-04-17 06:22:18.708458 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708473 | orchestrator |
2026-04-17 06:22:18.708492 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-17 06:22:18.708511 | orchestrator | Friday 17 April 2026 06:22:00 +0000 (0:00:00.505) 0:27:03.488 **********
2026-04-17 06:22:18.708529 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708576 | orchestrator |
2026-04-17 06:22:18.708589 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-17 06:22:18.708600 | orchestrator | Friday 17 April 2026 06:22:00 +0000 (0:00:00.140) 0:27:03.629 **********
2026-04-17 06:22:18.708612 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708624 | orchestrator |
2026-04-17 06:22:18.708637 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-17 06:22:18.708649 | orchestrator | Friday 17 April 2026 06:22:01 +0000 (0:00:00.203) 0:27:03.833 **********
2026-04-17 06:22:18.708662 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708675 | orchestrator |
2026-04-17 06:22:18.708686 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-17 06:22:18.708699 | orchestrator | Friday 17 April 2026 06:22:01 +0000 (0:00:00.152) 0:27:03.985 **********
2026-04-17 06:22:18.708712 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708723 | orchestrator |
2026-04-17 06:22:18.708735 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-17 06:22:18.708747 | orchestrator | Friday 17 April 2026 06:22:01 +0000 (0:00:00.164) 0:27:04.150 **********
2026-04-17 06:22:18.708760 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-17 06:22:18.708800 | orchestrator |
2026-04-17 06:22:18.708813 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-17 06:22:18.708826 | orchestrator | Friday 17 April 2026 06:22:04 +0000 (0:00:03.421) 0:27:07.571 **********
2026-04-17 06:22:18.708838 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:22:18.708852 | orchestrator |
2026-04-17 06:22:18.708865 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-17 06:22:18.708878 | orchestrator | Friday 17 April 2026 06:22:05 +0000 (0:00:00.180) 0:27:07.752 **********
2026-04-17 06:22:18.708893 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-17 06:22:18.708910 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-17 06:22:18.708925 | orchestrator |
2026-04-17 06:22:18.708938 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-17 06:22:18.708950 | orchestrator | Friday 17 April 2026 06:22:08 +0000 (0:00:03.871) 0:27:11.623 **********
2026-04-17 06:22:18.708963 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.708974 | orchestrator |
2026-04-17 06:22:18.709000 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-17 06:22:18.709013 | orchestrator | Friday 17 April 2026 06:22:09 +0000 (0:00:00.147) 0:27:11.771 **********
2026-04-17 06:22:18.709031 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.709049 | orchestrator |
2026-04-17 06:22:18.709065 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:22:18.709104 | orchestrator | Friday 17 April 2026 06:22:09 +0000 (0:00:00.124) 0:27:11.896 **********
2026-04-17 06:22:18.709123 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.709141 | orchestrator |
2026-04-17 06:22:18.709159 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:22:18.709179 | orchestrator | Friday 17 April 2026 06:22:09 +0000 (0:00:00.174) 0:27:12.071 **********
2026-04-17 06:22:18.709198 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.709215 | orchestrator |
2026-04-17 06:22:18.709243 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:22:18.709254 | orchestrator | Friday 17 April 2026 06:22:09 +0000 (0:00:00.166) 0:27:12.237 **********
2026-04-17 06:22:18.709265 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.709275 | orchestrator |
2026-04-17 06:22:18.709286 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:22:18.709297 | orchestrator | Friday 17 April 2026 06:22:09 +0000 (0:00:00.159) 0:27:12.396 **********
2026-04-17 06:22:18.709307 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:22:18.709319 | orchestrator |
2026-04-17 06:22:18.709329 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:22:18.709340 | orchestrator | Friday 17 April 2026 06:22:10 +0000 (0:00:00.689) 0:27:13.086 **********
2026-04-17 06:22:18.709350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 06:22:18.709362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 06:22:18.709372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 06:22:18.709383 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.709394 | orchestrator |
2026-04-17 06:22:18.709404 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 06:22:18.709415 | orchestrator | Friday 17 April 2026 06:22:10 +0000 (0:00:00.507) 0:27:13.594 **********
2026-04-17 06:22:18.709425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 06:22:18.709436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 06:22:18.709446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 06:22:18.709457 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.709468 | orchestrator |
2026-04-17 06:22:18.709478 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 06:22:18.709489 | orchestrator | Friday 17 April 2026 06:22:11 +0000 (0:00:00.458) 0:27:14.053 **********
2026-04-17 06:22:18.709500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 06:22:18.709510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 06:22:18.709521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 06:22:18.709531 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.709542 | orchestrator |
2026-04-17 06:22:18.709553 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 06:22:18.709563 | orchestrator | Friday 17 April 2026 06:22:11 +0000 (0:00:00.435) 0:27:14.488 **********
2026-04-17 06:22:18.709574 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:22:18.709585 | orchestrator |
2026-04-17 06:22:18.709595 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 06:22:18.709606 | orchestrator | Friday 17 April 2026 06:22:11 +0000 (0:00:00.171) 0:27:14.659 **********
2026-04-17 06:22:18.709616 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-17 06:22:18.709627 | orchestrator |
2026-04-17 06:22:18.709637 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-17 06:22:18.709648 | orchestrator | Friday 17 April 2026 06:22:12 +0000 (0:00:00.511) 0:27:15.171 **********
2026-04-17 06:22:18.709658 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:22:18.709669 | orchestrator |
2026-04-17 06:22:18.709680 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-04-17 06:22:18.709690 | orchestrator | Friday 17 April 2026 06:22:13 +0000 (0:00:00.860) 0:27:16.031 **********
2026-04-17 06:22:18.709701 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3
2026-04-17 06:22:18.709711 | orchestrator |
2026-04-17 06:22:18.709722 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-17 06:22:18.709732 | orchestrator | Friday 17 April 2026 06:22:13 +0000 (0:00:00.588) 0:27:16.620 **********
2026-04-17 06:22:18.709743 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:22:18.709753 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-17 06:22:18.709799 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-17 06:22:18.709812 | orchestrator |
2026-04-17 06:22:18.709823 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-17 06:22:18.709833 | orchestrator | Friday 17 April 2026 06:22:16 +0000 (0:00:02.218) 0:27:18.839 **********
2026-04-17 06:22:18.709843 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-04-17 06:22:18.709854 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-17 06:22:18.709865 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:22:18.709875 | orchestrator |
2026-04-17 06:22:18.709886 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-04-17 06:22:18.709896 | orchestrator | Friday 17 April 2026 06:22:17 +0000 (0:00:01.005) 0:27:19.844 **********
2026-04-17 06:22:18.709907 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:22:18.709917 | orchestrator |
2026-04-17 06:22:18.709928 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-04-17 06:22:18.709938 | orchestrator | Friday 17 April 2026 06:22:17 +0000 (0:00:00.518) 0:27:20.362 **********
2026-04-17 06:22:18.709955 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3
2026-04-17 06:22:18.709967 | orchestrator |
2026-04-17 06:22:18.709977 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-04-17 06:22:18.709988 | orchestrator | Friday 17 April 2026 06:22:18 +0000 (0:00:00.587) 0:27:20.950 **********
2026-04-17 06:22:18.710006 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:23:10.311436 | orchestrator |
2026-04-17 06:23:10.311551 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-04-17 06:23:10.311568 | orchestrator | Friday 17 April 2026 06:22:18 +0000 (0:00:00.610) 0:27:21.560 **********
2026-04-17 06:23:10.311580 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:23:10.311593 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-17 06:23:10.311604 | orchestrator |
2026-04-17 06:23:10.311615 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-17 06:23:10.311626 | orchestrator | Friday 17 April 2026 06:22:23 +0000 (0:00:04.223) 0:27:25.783 **********
2026-04-17 06:23:10.311637 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:23:10.311648 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-17 06:23:10.311659 | orchestrator |
2026-04-17 06:23:10.311670 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-17 06:23:10.311680 | orchestrator | Friday 17 April 2026 06:22:25 +0000 (0:00:02.095) 0:27:27.879 **********
2026-04-17 06:23:10.311692 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-04-17 06:23:10.311704 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:23:10.311715 | orchestrator |
2026-04-17 06:23:10.311726 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-04-17 06:23:10.311737 | orchestrator | Friday 17 April 2026 06:22:26 +0000 (0:00:00.628) 0:27:28.904 **********
2026-04-17 06:23:10.311748 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-04-17 06:23:10.311759 | orchestrator |
2026-04-17 06:23:10.311770 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-04-17 06:23:10.311781 | orchestrator | Friday 17 April 2026 06:22:26 +0000 (0:00:00.628) 0:27:29.533 **********
2026-04-17 06:23:10.311792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.311803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.311903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.311919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.311930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.311940 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:23:10.311951 | orchestrator |
2026-04-17 06:23:10.311980 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-17 06:23:10.312004 | orchestrator | Friday 17 April 2026 06:22:27 +0000 (0:00:00.993) 0:27:30.526 **********
2026-04-17 06:23:10.312017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.312030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.312042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.312054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.312066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.312079 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:23:10.312091 | orchestrator |
2026-04-17 06:23:10.312103 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-17 06:23:10.312115 | orchestrator | Friday 17 April 2026 06:22:28 +0000 (0:00:01.031) 0:27:31.558 **********
2026-04-17 06:23:10.312127 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.312141 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.312154 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.312183 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.312197 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:23:10.312209 | orchestrator |
2026-04-17 06:23:10.312221 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-17 06:23:10.312252 | orchestrator | Friday 17 April 2026 06:22:58 +0000 (0:00:30.186) 0:28:01.745 **********
2026-04-17 06:23:10.312264 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:23:10.312275 | orchestrator |
2026-04-17 06:23:10.312285 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-17 06:23:10.312296 | orchestrator | Friday 17 April 2026 06:22:59 +0000 (0:00:00.125) 0:28:01.870 **********
2026-04-17 06:23:10.312307 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:23:10.312318 | orchestrator |
2026-04-17 06:23:10.312328 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-17 06:23:10.312339 | orchestrator | Friday 17 April 2026 06:22:59 +0000 (0:00:00.496) 0:28:02.367 **********
2026-04-17 06:23:10.312350 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3
2026-04-17 06:23:10.312360 | orchestrator |
2026-04-17 06:23:10.312371 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-17 06:23:10.312382 | orchestrator | Friday 17 April 2026 06:23:00 +0000 (0:00:00.631) 0:28:02.998 **********
2026-04-17 06:23:10.312401 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3
2026-04-17 06:23:10.312412 | orchestrator |
2026-04-17 06:23:10.312423 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-17 06:23:10.312433 | orchestrator | Friday 17 April 2026 06:23:00 +0000 (0:00:00.571) 0:28:03.569 **********
2026-04-17 06:23:10.312444 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:23:10.312455 | orchestrator |
2026-04-17 06:23:10.312466 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-17 06:23:10.312476 | orchestrator | Friday 17 April 2026 06:23:01 +0000 (0:00:01.095) 0:28:04.664 **********
2026-04-17 06:23:10.312487 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:23:10.312498 | orchestrator |
2026-04-17 06:23:10.312509 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-17 06:23:10.312519 | orchestrator | Friday 17 April 2026 06:23:02 +0000 (0:00:00.935) 0:28:05.600 **********
2026-04-17 06:23:10.312530 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:23:10.312540 | orchestrator |
2026-04-17 06:23:10.312551 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-17 06:23:10.312562 | orchestrator | Friday 17 April 2026 06:23:04 +0000 (0:00:01.269) 0:28:06.869 **********
2026-04-17 06:23:10.312573 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 06:23:10.312583 | orchestrator |
2026-04-17 06:23:10.312594 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-04-17 06:23:10.312605 | orchestrator |
2026-04-17 06:23:10.312615 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-17 06:23:10.312626 | orchestrator | Friday 17 April 2026 06:23:06 +0000 (0:00:02.446) 0:28:09.316 **********
2026-04-17 06:23:10.312636 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-04-17 06:23:10.312647 | orchestrator |
2026-04-17 06:23:10.312657 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-17 06:23:10.312668 | orchestrator | Friday 17 April 2026 06:23:06 +0000 (0:00:00.277) 0:28:09.594 **********
2026-04-17 06:23:10.312679 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:10.312689 | orchestrator |
2026-04-17 06:23:10.312700 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-17 06:23:10.312710 | orchestrator | Friday 17 April 2026 06:23:07 +0000 (0:00:00.801) 0:28:10.396 **********
2026-04-17 06:23:10.312721 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:10.312731 | orchestrator |
2026-04-17 06:23:10.312742 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-17 06:23:10.312752 | orchestrator | Friday 17 April 2026 06:23:07 +0000 (0:00:00.520) 0:28:10.559 **********
2026-04-17 06:23:10.312763 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:10.312773 | orchestrator |
2026-04-17 06:23:10.312784 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-17 06:23:10.312795 | orchestrator | Friday 17 April 2026 06:23:08 +0000 (0:00:00.153) 0:28:11.079 **********
2026-04-17 06:23:10.312806 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:10.312841 | orchestrator |
2026-04-17 06:23:10.312854 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-17 06:23:10.312865 | orchestrator | Friday 17 April 2026 06:23:08 +0000 (0:00:00.153) 0:28:11.233 **********
2026-04-17 06:23:10.312875 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:10.312886 | orchestrator |
2026-04-17 06:23:10.312897 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-17 06:23:10.312908 | orchestrator | Friday 17 April 2026 06:23:08 +0000 (0:00:00.158) 0:28:11.391 **********
2026-04-17 06:23:10.312919 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:10.312929 | orchestrator |
2026-04-17 06:23:10.312940 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-17 06:23:10.312951 | orchestrator | Friday 17 April 2026 06:23:08 +0000 (0:00:00.167) 0:28:11.558 **********
2026-04-17 06:23:10.312969 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:10.312980 | orchestrator |
2026-04-17 06:23:10.312991 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-17 06:23:10.313001 | orchestrator | Friday 17 April 2026 06:23:08 +0000 (0:00:00.159) 0:28:11.718 **********
2026-04-17 06:23:10.313012 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:10.313023 | orchestrator |
2026-04-17 06:23:10.313033 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-17 06:23:10.313050 | orchestrator | Friday 17 April 2026 06:23:09 +0000 (0:00:00.172) 0:28:11.890 **********
2026-04-17 06:23:10.313061 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:23:10.313072 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:23:10.313083 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:23:10.313094 | orchestrator |
2026-04-17 06:23:10.313104 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-17 06:23:10.313122 | orchestrator | Friday 17 April 2026 06:23:10 +0000 (0:00:01.152) 0:28:13.043 **********
2026-04-17 06:23:17.812184 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:17.812298 | orchestrator |
2026-04-17 06:23:17.812315 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-17 06:23:17.812328 | orchestrator | Friday 17 April 2026 06:23:10 +0000 (0:00:00.290) 0:28:13.334 **********
2026-04-17 06:23:17.812339 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:23:17.812351 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:23:17.812362 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:23:17.812373 | orchestrator |
2026-04-17 06:23:17.812384 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-17 06:23:17.812394 | orchestrator | Friday 17 April 2026 06:23:12 +0000 (0:00:02.248) 0:28:15.583 **********
2026-04-17 06:23:17.812405 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 06:23:17.812417 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 06:23:17.812428 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 06:23:17.812438 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:17.812449 | orchestrator |
2026-04-17 06:23:17.812460 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-17 06:23:17.812471 | orchestrator | Friday 17 April 2026 06:23:13 +0000 (0:00:00.637) 0:28:16.220 **********
2026-04-17 06:23:17.812483 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:23:17.812498 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:23:17.812509 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:23:17.812520 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:17.812531 | orchestrator |
2026-04-17 06:23:17.812542 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-17 06:23:17.812552 | orchestrator | Friday 17 April 2026 06:23:14 +0000 (0:00:00.824) 0:28:17.045 **********
2026-04-17 06:23:17.812566 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:23:17.812603 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:23:17.812616 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-17 06:23:17.812627 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:17.812638 | orchestrator |
2026-04-17 06:23:17.812649 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-17 06:23:17.812660 | orchestrator | Friday 17 April 2026 06:23:14 +0000 (0:00:00.416) 0:28:17.461 **********
2026-04-17 06:23:17.812704 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:23:11.123453', 'end': '2026-04-17 06:23:11.181026', 'delta': '0:00:00.057573', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-17 06:23:17.812720 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:23:12.125156', 'end': '2026-04-17 06:23:12.172706', 'delta': '0:00:00.047550', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-17 06:23:17.812734 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:23:12.659128', 'end': '2026-04-17 06:23:12.701969', 'delta': '0:00:00.042841', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-17 06:23:17.812747 | orchestrator |
2026-04-17 06:23:17.812759 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-17 06:23:17.812772 | orchestrator | Friday 17 April 2026 06:23:14 +0000 (0:00:00.179) 0:28:17.641 **********
2026-04-17 06:23:17.812791 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:17.812803 | orchestrator |
2026-04-17 06:23:17.812816 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-17 06:23:17.812857 | orchestrator | Friday 17 April 2026 06:23:15 +0000 (0:00:00.232) 0:28:17.873 **********
2026-04-17 06:23:17.812870 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:17.812882 | orchestrator |
2026-04-17 06:23:17.812894 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-17 06:23:17.812906 | orchestrator | Friday 17 April 2026 06:23:15 +0000 (0:00:00.251) 0:28:18.125 **********
2026-04-17 06:23:17.812918 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:17.812929 | orchestrator |
2026-04-17 06:23:17.812941 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-17 06:23:17.812953 | orchestrator | Friday 17 April 2026 06:23:15 +0000 (0:00:00.142) 0:28:18.267 **********
2026-04-17 06:23:17.812966 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-17 06:23:17.812978 | orchestrator |
2026-04-17 06:23:17.812990 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:23:17.813002 | orchestrator | Friday 17 April 2026 06:23:16 +0000 (0:00:00.956) 0:28:19.224 **********
2026-04-17 06:23:17.813015 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:17.813026 | orchestrator |
2026-04-17 06:23:17.813038 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-17 06:23:17.813050 | orchestrator | Friday 17 April 2026 06:23:16 +0000 (0:00:00.152) 0:28:19.377 **********
2026-04-17 06:23:17.813062 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:17.813075 | orchestrator |
2026-04-17 06:23:17.813087 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-17 06:23:17.813098 | orchestrator | Friday 17 April 2026 06:23:16 +0000 (0:00:00.140) 0:28:19.518 **********
2026-04-17 06:23:17.813108 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:17.813119 | orchestrator |
2026-04-17 06:23:17.813130 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-17 06:23:17.813140 | orchestrator | Friday 17 April 2026 06:23:17 +0000 (0:00:00.243) 0:28:19.762 **********
2026-04-17 06:23:17.813151 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:17.813161 | orchestrator |
2026-04-17 06:23:17.813172 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-17 06:23:17.813182 | orchestrator | Friday 17 April 2026 06:23:17 +0000 (0:00:00.150) 0:28:19.912 **********
2026-04-17 06:23:17.813193 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:17.813203 | orchestrator |
2026-04-17 06:23:17.813214 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-17 06:23:17.813225 | orchestrator | Friday 17 April 2026 06:23:17 +0000 (0:00:00.130) 0:28:20.042 **********
2026-04-17 06:23:17.813235 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:17.813246 | orchestrator |
2026-04-17 06:23:17.813257 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-17 06:23:17.813267 | orchestrator | Friday 17 April 2026 06:23:17 +0000 (0:00:00.187) 0:28:20.230 **********
2026-04-17 06:23:17.813278 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:17.813288 | orchestrator |
2026-04-17 06:23:17.813299 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-17 06:23:17.813310 | orchestrator | Friday 17 April 2026 06:23:17 +0000 (0:00:00.138) 0:28:20.368 **********
2026-04-17 06:23:17.813320 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:17.813331 | orchestrator |
2026-04-17 06:23:17.813342 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-17 06:23:17.813360 | orchestrator | Friday 17 April 2026 06:23:17 +0000 (0:00:00.183) 0:28:20.551 **********
2026-04-17 06:23:18.732473 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:18.732575 | orchestrator |
2026-04-17 06:23:18.732591 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-17 06:23:18.732604
| orchestrator | Friday 17 April 2026 06:23:18 +0000 (0:00:00.531) 0:28:21.083 ********** 2026-04-17 06:23:18.732639 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:18.732652 | orchestrator | 2026-04-17 06:23:18.732663 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 06:23:18.732674 | orchestrator | Friday 17 April 2026 06:23:18 +0000 (0:00:00.168) 0:28:21.252 ********** 2026-04-17 06:23:18.732729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:23:18.732748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'uuids': ['0c9a4a4e-baea-4a48-b886-e6edd30675e6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB']}})  2026-04-17 06:23:18.732762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdcd9064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 06:23:18.732775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd']}})  2026-04-17 06:23:18.732787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:23:18.732804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:23:18.732886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 06:23:18.732910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:23:18.732922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM', 'dm-uuid-CRYPT-LUKS2-23d95080c3d748658de3cafbcbf22080-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:23:18.732933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:23:18.732944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'uuids': ['23d95080-c3d7-4865-8de3-cafbcbf22080'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM']}})  2026-04-17 06:23:18.732956 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0']}})  2026-04-17 06:23:18.732973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:23:18.732999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '11ed6889', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-17 06:23:19.061352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:23:19.061466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:23:19.061492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB', 'dm-uuid-CRYPT-LUKS2-0c9a4a4ebaea4a48b886e6edd30675e6-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:23:19.061517 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:19.061537 | orchestrator | 2026-04-17 06:23:19.061557 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 06:23:19.061576 | orchestrator | Friday 17 April 2026 06:23:18 +0000 (0:00:00.357) 0:28:21.609 ********** 2026-04-17 06:23:19.061618 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:19.061655 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0', 'dm-uuid-LVM-x8wPNc9ppABx7omkNjwDsZ36srhxaotWN2sw2kSuQlI1whwt0obeiQkPsGz0OLLB'], 'uuids': ['0c9a4a4e-baea-4a48-b886-e6edd30675e6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:19.061668 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784', 'scsi-SQEMU_QEMU_HARDDISK_cdcd9064-7955-4761-96c4-269b5aa6d784'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdcd9064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:19.061699 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-EksHNS-9Lf8-MU98-0Ni7-TkM1-Ad96-Nm3L8n', 'scsi-0QEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4', 'scsi-SQEMU_QEMU_HARDDISK_ea8ffa79-e5e6-4d64-b884-dbf56eae3ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:19.061714 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:19.061745 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:19.061765 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:19.061776 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:19.061795 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM', 'dm-uuid-CRYPT-LUKS2-23d95080c3d748658de3cafbcbf22080-kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:20.358814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:20.358936 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b2b01680--30d5--524c--a810--0db40fd977fd-osd--block--b2b01680--30d5--524c--a810--0db40fd977fd', 'dm-uuid-LVM-UEl0XX7dQucfhZdh7UAdzyFehWxhVFddkbHrba8CuNNj2i7S0Tbe32fpnBhCZbbM'], 'uuids': ['23d95080-c3d7-4865-8de3-cafbcbf22080'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ea8ffa79', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['kbHrba-8CuN-Nj2i-7S0T-be32-fpnB-hCZbbM']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:20.358986 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-zYr2Nh-d4ad-Ek20-HAf2-q5UC-ssNp-SAMeIq', 'scsi-0QEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96', 'scsi-SQEMU_QEMU_HARDDISK_193d71a8-114c-4752-adc0-dee4f1d71a96'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '193d71a8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1504e56e--19fb--5fe8--bf47--cc017f2297d0-osd--block--1504e56e--19fb--5fe8--bf47--cc017f2297d0']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:20.358998 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:20.359023 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '11ed6889', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ed6889-50a7-45eb-8f5f-b49aa967e3d6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:20.359043 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:20.359052 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:23:20.359060 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB', 'dm-uuid-CRYPT-LUKS2-0c9a4a4ebaea4a48b886e6edd30675e6-N2sw2k-SuQl-I1wh-wt0o-beiQ-kPsG-z0OLLB'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:23:20.359069 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:20.359078 | orchestrator |
2026-04-17 06:23:20.359086 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-17 06:23:20.359094 | orchestrator | Friday 17 April 2026 06:23:19 +0000 (0:00:00.408) 0:28:22.018 **********
2026-04-17 06:23:20.359102 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:20.359109 | orchestrator |
2026-04-17 06:23:20.359117 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-17 06:23:20.359124 | orchestrator | Friday 17 April 2026 06:23:19 +0000 (0:00:00.465) 0:28:22.484 **********
2026-04-17 06:23:20.359131 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:20.359138 | orchestrator |
2026-04-17 06:23:20.359145 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:23:20.359153 | orchestrator | Friday 17 April 2026 06:23:19 +0000 (0:00:00.120) 0:28:22.605 **********
2026-04-17 06:23:20.359160 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:20.359167 | orchestrator |
2026-04-17 06:23:20.359175 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:23:20.359187 | orchestrator | Friday 17 April 2026 06:23:20 +0000 (0:00:00.494) 0:28:23.100 **********
2026-04-17 06:23:36.065833 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:36.065940 | orchestrator |
2026-04-17 06:23:36.065949 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:23:36.065956 | orchestrator | Friday 17 April 2026 06:23:20 +0000 (0:00:00.125) 0:28:23.225 **********
2026-04-17 06:23:36.065962 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:36.065967 | orchestrator |
2026-04-17 06:23:36.065973 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:23:36.065978 | orchestrator | Friday 17 April 2026 06:23:20 +0000 (0:00:00.267) 0:28:23.493 **********
2026-04-17 06:23:36.065983 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:36.066003 | orchestrator |
2026-04-17 06:23:36.066009 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 06:23:36.066054 | orchestrator | Friday 17 April 2026 06:23:20 +0000 (0:00:00.153) 0:28:23.646 **********
2026-04-17 06:23:36.066060 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 06:23:36.066066 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 06:23:36.066071 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 06:23:36.066076 | orchestrator |
2026-04-17 06:23:36.066081 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 06:23:36.066087 | orchestrator | Friday 17 April 2026 06:23:22 +0000 (0:00:01.168) 0:28:24.815 **********
2026-04-17 06:23:36.066092 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 06:23:36.066098 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 06:23:36.066103 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 06:23:36.066108 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:36.066113 | orchestrator |
2026-04-17 06:23:36.066118 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-17 06:23:36.066123 | orchestrator | Friday 17 April 2026 06:23:22 +0000 (0:00:00.159) 0:28:24.974 **********
2026-04-17 06:23:36.066128 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-04-17 06:23:36.066134 | orchestrator |
2026-04-17 06:23:36.066140 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:23:36.066146 | orchestrator | Friday 17 April 2026 06:23:22 +0000 (0:00:00.642) 0:28:25.616 **********
2026-04-17 06:23:36.066152 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:36.066157 | orchestrator |
2026-04-17 06:23:36.066172 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:23:36.066177 | orchestrator | Friday 17 April 2026 06:23:23 +0000 (0:00:00.156) 0:28:25.772 **********
2026-04-17 06:23:36.066182 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:36.066187 | orchestrator |
2026-04-17 06:23:36.066192 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:23:36.066197 | orchestrator | Friday 17 April 2026 06:23:23 +0000 (0:00:00.162) 0:28:25.935 **********
2026-04-17 06:23:36.066202 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:23:36.066207 | orchestrator |
2026-04-17 06:23:36.066212 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:23:36.066217 | orchestrator | Friday 17 April 2026 06:23:23 +0000 (0:00:00.154) 0:28:26.090 **********
2026-04-17 06:23:36.066222 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:23:36.066227 | orchestrator |
2026-04-17 06:23:36.066232 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:23:36.066237 | orchestrator | Friday 17 April 2026 06:23:23 +0000 (0:00:00.239) 0:28:26.330 **********
2026-04-17 06:23:36.066242 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-17 06:23:36.066248 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 06:23:36.066253 | orchestrator | skipping: [testbed-node-4]
=> (item=testbed-node-5)  2026-04-17 06:23:36.066258 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:36.066263 | orchestrator | 2026-04-17 06:23:36.066268 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:23:36.066273 | orchestrator | Friday 17 April 2026 06:23:24 +0000 (0:00:00.438) 0:28:26.769 ********** 2026-04-17 06:23:36.066278 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-17 06:23:36.066283 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-17 06:23:36.066288 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-17 06:23:36.066293 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:36.066298 | orchestrator | 2026-04-17 06:23:36.066303 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:23:36.066313 | orchestrator | Friday 17 April 2026 06:23:24 +0000 (0:00:00.410) 0:28:27.179 ********** 2026-04-17 06:23:36.066318 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-17 06:23:36.066323 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-17 06:23:36.066328 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-17 06:23:36.066333 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:36.066338 | orchestrator | 2026-04-17 06:23:36.066342 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 06:23:36.066348 | orchestrator | Friday 17 April 2026 06:23:24 +0000 (0:00:00.413) 0:28:27.593 ********** 2026-04-17 06:23:36.066353 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:36.066358 | orchestrator | 2026-04-17 06:23:36.066362 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:23:36.066367 | orchestrator | Friday 17 April 2026 06:23:25 +0000 
(0:00:00.166) 0:28:27.760 ********** 2026-04-17 06:23:36.066373 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-17 06:23:36.066378 | orchestrator | 2026-04-17 06:23:36.066383 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-17 06:23:36.066388 | orchestrator | Friday 17 April 2026 06:23:25 +0000 (0:00:00.396) 0:28:28.156 ********** 2026-04-17 06:23:36.066404 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:23:36.066410 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:23:36.066415 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:23:36.066420 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-17 06:23:36.066425 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-17 06:23:36.066430 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 06:23:36.066436 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:23:36.066441 | orchestrator | 2026-04-17 06:23:36.066446 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-17 06:23:36.066451 | orchestrator | Friday 17 April 2026 06:23:26 +0000 (0:00:01.228) 0:28:29.385 ********** 2026-04-17 06:23:36.066456 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:23:36.066461 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:23:36.066466 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:23:36.066471 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-17 06:23:36.066476 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-17 06:23:36.066481 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 06:23:36.066486 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 06:23:36.066491 | orchestrator | 2026-04-17 06:23:36.066495 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-04-17 06:23:36.066501 | orchestrator | Friday 17 April 2026 06:23:28 +0000 (0:00:01.786) 0:28:31.171 ********** 2026-04-17 06:23:36.066506 | orchestrator | changed: [testbed-node-4] 2026-04-17 06:23:36.066511 | orchestrator | 2026-04-17 06:23:36.066516 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-04-17 06:23:36.066520 | orchestrator | Friday 17 April 2026 06:23:30 +0000 (0:00:01.580) 0:28:32.752 ********** 2026-04-17 06:23:36.066566 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 06:23:36.066572 | orchestrator | 2026-04-17 06:23:36.066577 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-04-17 06:23:36.066587 | orchestrator | Friday 17 April 2026 06:23:31 +0000 (0:00:01.814) 0:28:34.567 ********** 2026-04-17 06:23:36.066593 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 06:23:36.066598 | orchestrator | 2026-04-17 06:23:36.066603 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 06:23:36.066607 | orchestrator | Friday 17 April 2026 06:23:33 +0000 (0:00:01.353) 0:28:35.920 ********** 2026-04-17 06:23:36.066612 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-04-17 06:23:36.066617 | orchestrator | 2026-04-17 06:23:36.066622 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 06:23:36.066627 | orchestrator | Friday 17 April 2026 06:23:33 +0000 (0:00:00.215) 0:28:36.136 ********** 2026-04-17 06:23:36.066632 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-04-17 06:23:36.066637 | orchestrator | 2026-04-17 06:23:36.066642 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 06:23:36.066647 | orchestrator | Friday 17 April 2026 06:23:33 +0000 (0:00:00.219) 0:28:36.355 ********** 2026-04-17 06:23:36.066652 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:36.066657 | orchestrator | 2026-04-17 06:23:36.066662 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 06:23:36.066667 | orchestrator | Friday 17 April 2026 06:23:33 +0000 (0:00:00.123) 0:28:36.479 ********** 2026-04-17 06:23:36.066672 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:36.066677 | orchestrator | 2026-04-17 06:23:36.066682 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-17 06:23:36.066687 | orchestrator | Friday 17 April 2026 06:23:34 +0000 (0:00:00.513) 0:28:36.992 ********** 2026-04-17 06:23:36.066692 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:36.066697 | orchestrator | 2026-04-17 06:23:36.066702 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 06:23:36.066707 | orchestrator | Friday 17 April 2026 06:23:34 +0000 (0:00:00.538) 0:28:37.530 ********** 2026-04-17 06:23:36.066712 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:36.066717 | orchestrator | 2026-04-17 06:23:36.066722 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-17 06:23:36.066727 | orchestrator | Friday 17 April 2026 06:23:35 +0000 (0:00:00.523) 0:28:38.053 ********** 2026-04-17 06:23:36.066732 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:36.066737 | orchestrator | 2026-04-17 06:23:36.066742 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 06:23:36.066747 | orchestrator | Friday 17 April 2026 06:23:35 +0000 (0:00:00.136) 0:28:38.190 ********** 2026-04-17 06:23:36.066752 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:36.066757 | orchestrator | 2026-04-17 06:23:36.066761 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 06:23:36.066767 | orchestrator | Friday 17 April 2026 06:23:35 +0000 (0:00:00.121) 0:28:38.312 ********** 2026-04-17 06:23:36.066772 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:36.066776 | orchestrator | 2026-04-17 06:23:36.066781 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 06:23:36.066790 | orchestrator | Friday 17 April 2026 06:23:36 +0000 (0:00:00.485) 0:28:38.798 ********** 2026-04-17 06:23:47.324305 | 
orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:47.324395 | orchestrator | 2026-04-17 06:23:47.324405 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 06:23:47.324413 | orchestrator | Friday 17 April 2026 06:23:36 +0000 (0:00:00.534) 0:28:39.333 ********** 2026-04-17 06:23:47.324420 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:47.324427 | orchestrator | 2026-04-17 06:23:47.324433 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 06:23:47.324440 | orchestrator | Friday 17 April 2026 06:23:37 +0000 (0:00:00.543) 0:28:39.876 ********** 2026-04-17 06:23:47.324464 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324471 | orchestrator | 2026-04-17 06:23:47.324477 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 06:23:47.324484 | orchestrator | Friday 17 April 2026 06:23:37 +0000 (0:00:00.150) 0:28:40.026 ********** 2026-04-17 06:23:47.324490 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324496 | orchestrator | 2026-04-17 06:23:47.324502 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 06:23:47.324508 | orchestrator | Friday 17 April 2026 06:23:37 +0000 (0:00:00.130) 0:28:40.157 ********** 2026-04-17 06:23:47.324514 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:47.324520 | orchestrator | 2026-04-17 06:23:47.324526 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 06:23:47.324532 | orchestrator | Friday 17 April 2026 06:23:37 +0000 (0:00:00.167) 0:28:40.325 ********** 2026-04-17 06:23:47.324539 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:47.324545 | orchestrator | 2026-04-17 06:23:47.324551 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 06:23:47.324558 
| orchestrator | Friday 17 April 2026 06:23:37 +0000 (0:00:00.165) 0:28:40.491 ********** 2026-04-17 06:23:47.324564 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:47.324570 | orchestrator | 2026-04-17 06:23:47.324576 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 06:23:47.324582 | orchestrator | Friday 17 April 2026 06:23:37 +0000 (0:00:00.164) 0:28:40.655 ********** 2026-04-17 06:23:47.324588 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324594 | orchestrator | 2026-04-17 06:23:47.324600 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 06:23:47.324606 | orchestrator | Friday 17 April 2026 06:23:38 +0000 (0:00:00.161) 0:28:40.817 ********** 2026-04-17 06:23:47.324612 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324618 | orchestrator | 2026-04-17 06:23:47.324635 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 06:23:47.324642 | orchestrator | Friday 17 April 2026 06:23:38 +0000 (0:00:00.139) 0:28:40.956 ********** 2026-04-17 06:23:47.324648 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324655 | orchestrator | 2026-04-17 06:23:47.324661 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 06:23:47.324667 | orchestrator | Friday 17 April 2026 06:23:38 +0000 (0:00:00.131) 0:28:41.088 ********** 2026-04-17 06:23:47.324674 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:47.324680 | orchestrator | 2026-04-17 06:23:47.324686 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 06:23:47.324693 | orchestrator | Friday 17 April 2026 06:23:38 +0000 (0:00:00.178) 0:28:41.267 ********** 2026-04-17 06:23:47.324699 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:47.324705 | orchestrator | 2026-04-17 06:23:47.324712 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-17 06:23:47.324718 | orchestrator | Friday 17 April 2026 06:23:39 +0000 (0:00:00.605) 0:28:41.872 ********** 2026-04-17 06:23:47.324724 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324730 | orchestrator | 2026-04-17 06:23:47.324737 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-17 06:23:47.324743 | orchestrator | Friday 17 April 2026 06:23:39 +0000 (0:00:00.136) 0:28:42.009 ********** 2026-04-17 06:23:47.324749 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324756 | orchestrator | 2026-04-17 06:23:47.324762 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-17 06:23:47.324768 | orchestrator | Friday 17 April 2026 06:23:39 +0000 (0:00:00.131) 0:28:42.140 ********** 2026-04-17 06:23:47.324775 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324781 | orchestrator | 2026-04-17 06:23:47.324787 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-17 06:23:47.324793 | orchestrator | Friday 17 April 2026 06:23:39 +0000 (0:00:00.124) 0:28:42.265 ********** 2026-04-17 06:23:47.324805 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324811 | orchestrator | 2026-04-17 06:23:47.324817 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-17 06:23:47.324824 | orchestrator | Friday 17 April 2026 06:23:39 +0000 (0:00:00.160) 0:28:42.425 ********** 2026-04-17 06:23:47.324830 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324836 | orchestrator | 2026-04-17 06:23:47.324843 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-17 06:23:47.324885 | orchestrator | Friday 17 April 2026 06:23:39 +0000 (0:00:00.151) 0:28:42.577 ********** 
2026-04-17 06:23:47.324894 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324901 | orchestrator | 2026-04-17 06:23:47.324908 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-17 06:23:47.324915 | orchestrator | Friday 17 April 2026 06:23:39 +0000 (0:00:00.142) 0:28:42.719 ********** 2026-04-17 06:23:47.324923 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324930 | orchestrator | 2026-04-17 06:23:47.324937 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-17 06:23:47.324945 | orchestrator | Friday 17 April 2026 06:23:40 +0000 (0:00:00.149) 0:28:42.869 ********** 2026-04-17 06:23:47.324952 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324959 | orchestrator | 2026-04-17 06:23:47.324966 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-17 06:23:47.324973 | orchestrator | Friday 17 April 2026 06:23:40 +0000 (0:00:00.124) 0:28:42.993 ********** 2026-04-17 06:23:47.324980 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.324987 | orchestrator | 2026-04-17 06:23:47.325006 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-17 06:23:47.325014 | orchestrator | Friday 17 April 2026 06:23:40 +0000 (0:00:00.129) 0:28:43.123 ********** 2026-04-17 06:23:47.325021 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.325028 | orchestrator | 2026-04-17 06:23:47.325035 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-17 06:23:47.325042 | orchestrator | Friday 17 April 2026 06:23:40 +0000 (0:00:00.140) 0:28:43.263 ********** 2026-04-17 06:23:47.325049 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.325055 | orchestrator | 2026-04-17 06:23:47.325063 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-17 06:23:47.325069 | orchestrator | Friday 17 April 2026 06:23:40 +0000 (0:00:00.127) 0:28:43.391 ********** 2026-04-17 06:23:47.325076 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.325083 | orchestrator | 2026-04-17 06:23:47.325090 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-17 06:23:47.325097 | orchestrator | Friday 17 April 2026 06:23:41 +0000 (0:00:00.560) 0:28:43.951 ********** 2026-04-17 06:23:47.325104 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:47.325111 | orchestrator | 2026-04-17 06:23:47.325118 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-17 06:23:47.325125 | orchestrator | Friday 17 April 2026 06:23:42 +0000 (0:00:00.933) 0:28:44.885 ********** 2026-04-17 06:23:47.325132 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:47.325139 | orchestrator | 2026-04-17 06:23:47.325146 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-17 06:23:47.325153 | orchestrator | Friday 17 April 2026 06:23:43 +0000 (0:00:01.238) 0:28:46.124 ********** 2026-04-17 06:23:47.325160 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-04-17 06:23:47.325168 | orchestrator | 2026-04-17 06:23:47.325175 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-17 06:23:47.325182 | orchestrator | Friday 17 April 2026 06:23:43 +0000 (0:00:00.207) 0:28:46.331 ********** 2026-04-17 06:23:47.325189 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.325196 | orchestrator | 2026-04-17 06:23:47.325203 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-17 06:23:47.325216 | orchestrator | Friday 17 April 2026 06:23:43 +0000 (0:00:00.145) 0:28:46.477 ********** 
2026-04-17 06:23:47.325222 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.325228 | orchestrator | 2026-04-17 06:23:47.325238 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-17 06:23:47.325245 | orchestrator | Friday 17 April 2026 06:23:43 +0000 (0:00:00.135) 0:28:46.612 ********** 2026-04-17 06:23:47.325251 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 06:23:47.325257 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 06:23:47.325263 | orchestrator | 2026-04-17 06:23:47.325270 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-17 06:23:47.325276 | orchestrator | Friday 17 April 2026 06:23:44 +0000 (0:00:00.782) 0:28:47.395 ********** 2026-04-17 06:23:47.325282 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:47.325288 | orchestrator | 2026-04-17 06:23:47.325294 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-17 06:23:47.325300 | orchestrator | Friday 17 April 2026 06:23:45 +0000 (0:00:00.447) 0:28:47.843 ********** 2026-04-17 06:23:47.325306 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.325312 | orchestrator | 2026-04-17 06:23:47.325318 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-17 06:23:47.325324 | orchestrator | Friday 17 April 2026 06:23:45 +0000 (0:00:00.159) 0:28:48.003 ********** 2026-04-17 06:23:47.325330 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.325336 | orchestrator | 2026-04-17 06:23:47.325342 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-17 06:23:47.325349 | orchestrator | Friday 17 April 2026 06:23:45 +0000 (0:00:00.175) 0:28:48.179 ********** 2026-04-17 06:23:47.325355 | orchestrator | 
skipping: [testbed-node-4] 2026-04-17 06:23:47.325361 | orchestrator | 2026-04-17 06:23:47.325367 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-17 06:23:47.325373 | orchestrator | Friday 17 April 2026 06:23:45 +0000 (0:00:00.136) 0:28:48.315 ********** 2026-04-17 06:23:47.325379 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-04-17 06:23:47.325385 | orchestrator | 2026-04-17 06:23:47.325391 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-17 06:23:47.325397 | orchestrator | Friday 17 April 2026 06:23:45 +0000 (0:00:00.219) 0:28:48.535 ********** 2026-04-17 06:23:47.325403 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:23:47.325409 | orchestrator | 2026-04-17 06:23:47.325416 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-17 06:23:47.325422 | orchestrator | Friday 17 April 2026 06:23:46 +0000 (0:00:01.048) 0:28:49.584 ********** 2026-04-17 06:23:47.325428 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-17 06:23:47.325434 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 06:23:47.325440 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 06:23:47.325446 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.325452 | orchestrator | 2026-04-17 06:23:47.325458 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-17 06:23:47.325464 | orchestrator | Friday 17 April 2026 06:23:46 +0000 (0:00:00.153) 0:28:49.738 ********** 2026-04-17 06:23:47.325470 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.325476 | orchestrator | 2026-04-17 06:23:47.325482 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-17 06:23:47.325488 | orchestrator | Friday 17 April 2026 06:23:47 +0000 (0:00:00.133) 0:28:49.872 ********** 2026-04-17 06:23:47.325494 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:23:47.325500 | orchestrator | 2026-04-17 06:23:47.325510 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-17 06:24:05.411768 | orchestrator | Friday 17 April 2026 06:23:47 +0000 (0:00:00.190) 0:28:50.062 ********** 2026-04-17 06:24:05.411943 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.411959 | orchestrator | 2026-04-17 06:24:05.411967 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-17 06:24:05.411974 | orchestrator | Friday 17 April 2026 06:23:47 +0000 (0:00:00.180) 0:28:50.243 ********** 2026-04-17 06:24:05.411981 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.411988 | orchestrator | 2026-04-17 06:24:05.411995 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-17 06:24:05.412002 | orchestrator | Friday 17 April 2026 06:23:47 +0000 (0:00:00.149) 0:28:50.393 ********** 2026-04-17 06:24:05.412008 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412015 | orchestrator | 2026-04-17 06:24:05.412022 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-17 06:24:05.412028 | orchestrator | Friday 17 April 2026 06:23:47 +0000 (0:00:00.167) 0:28:50.560 ********** 2026-04-17 06:24:05.412035 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:24:05.412043 | orchestrator | 2026-04-17 06:24:05.412049 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-17 06:24:05.412057 | orchestrator | Friday 17 April 2026 06:23:49 +0000 (0:00:01.548) 0:28:52.108 ********** 2026-04-17 06:24:05.412064 | orchestrator | ok: 
[testbed-node-4] 2026-04-17 06:24:05.412071 | orchestrator | 2026-04-17 06:24:05.412077 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-17 06:24:05.412084 | orchestrator | Friday 17 April 2026 06:23:49 +0000 (0:00:00.155) 0:28:52.264 ********** 2026-04-17 06:24:05.412091 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-04-17 06:24:05.412098 | orchestrator | 2026-04-17 06:24:05.412104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-17 06:24:05.412111 | orchestrator | Friday 17 April 2026 06:23:49 +0000 (0:00:00.226) 0:28:52.490 ********** 2026-04-17 06:24:05.412117 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412124 | orchestrator | 2026-04-17 06:24:05.412131 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-17 06:24:05.412137 | orchestrator | Friday 17 April 2026 06:23:49 +0000 (0:00:00.155) 0:28:52.645 ********** 2026-04-17 06:24:05.412144 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412151 | orchestrator | 2026-04-17 06:24:05.412170 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-17 06:24:05.412177 | orchestrator | Friday 17 April 2026 06:23:50 +0000 (0:00:00.153) 0:28:52.799 ********** 2026-04-17 06:24:05.412184 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412191 | orchestrator | 2026-04-17 06:24:05.412197 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-17 06:24:05.412204 | orchestrator | Friday 17 April 2026 06:23:50 +0000 (0:00:00.515) 0:28:53.314 ********** 2026-04-17 06:24:05.412211 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412217 | orchestrator | 2026-04-17 06:24:05.412224 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-17 06:24:05.412231 | orchestrator | Friday 17 April 2026 06:23:50 +0000 (0:00:00.145) 0:28:53.460 ********** 2026-04-17 06:24:05.412237 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412244 | orchestrator | 2026-04-17 06:24:05.412251 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-17 06:24:05.412257 | orchestrator | Friday 17 April 2026 06:23:50 +0000 (0:00:00.177) 0:28:53.638 ********** 2026-04-17 06:24:05.412264 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412271 | orchestrator | 2026-04-17 06:24:05.412277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-17 06:24:05.412284 | orchestrator | Friday 17 April 2026 06:23:51 +0000 (0:00:00.150) 0:28:53.788 ********** 2026-04-17 06:24:05.412290 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412297 | orchestrator | 2026-04-17 06:24:05.412305 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-17 06:24:05.412318 | orchestrator | Friday 17 April 2026 06:23:51 +0000 (0:00:00.152) 0:28:53.941 ********** 2026-04-17 06:24:05.412326 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412334 | orchestrator | 2026-04-17 06:24:05.412341 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-17 06:24:05.412349 | orchestrator | Friday 17 April 2026 06:23:51 +0000 (0:00:00.165) 0:28:54.106 ********** 2026-04-17 06:24:05.412357 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:24:05.412364 | orchestrator | 2026-04-17 06:24:05.412372 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-17 06:24:05.412380 | orchestrator | Friday 17 April 2026 06:23:51 +0000 (0:00:00.246) 0:28:54.352 ********** 2026-04-17 06:24:05.412387 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-17 06:24:05.412396 | orchestrator | 2026-04-17 06:24:05.412403 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-17 06:24:05.412411 | orchestrator | Friday 17 April 2026 06:23:51 +0000 (0:00:00.208) 0:28:54.561 ********** 2026-04-17 06:24:05.412419 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-04-17 06:24:05.412427 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-17 06:24:05.412435 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-17 06:24:05.412442 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-17 06:24:05.412448 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-17 06:24:05.412455 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-17 06:24:05.412461 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-17 06:24:05.412468 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-17 06:24:05.412475 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-17 06:24:05.412482 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-17 06:24:05.412489 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-17 06:24:05.412509 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-17 06:24:05.412517 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-17 06:24:05.412524 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-17 06:24:05.412530 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-17 06:24:05.412537 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-17 06:24:05.412544 | orchestrator | 2026-04-17 06:24:05.412551 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-17 06:24:05.412557 | orchestrator | Friday 17 April 2026 06:23:57 +0000 (0:00:05.443) 0:29:00.004 ********** 2026-04-17 06:24:05.412564 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-17 06:24:05.412571 | orchestrator | 2026-04-17 06:24:05.412577 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-17 06:24:05.412584 | orchestrator | Friday 17 April 2026 06:23:57 +0000 (0:00:00.199) 0:29:00.204 ********** 2026-04-17 06:24:05.412590 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 06:24:05.412598 | orchestrator | 2026-04-17 06:24:05.412605 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-17 06:24:05.412611 | orchestrator | Friday 17 April 2026 06:23:58 +0000 (0:00:00.871) 0:29:01.075 ********** 2026-04-17 06:24:05.412618 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 06:24:05.412625 | orchestrator | 2026-04-17 06:24:05.412631 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-17 06:24:05.412638 | orchestrator | Friday 17 April 2026 06:23:59 +0000 (0:00:00.985) 0:29:02.061 ********** 2026-04-17 06:24:05.412644 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412656 | orchestrator | 2026-04-17 06:24:05.412663 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-17 06:24:05.412669 | orchestrator | Friday 17 April 2026 06:23:59 +0000 (0:00:00.177) 0:29:02.238 ********** 2026-04-17 06:24:05.412676 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412686 | 
orchestrator | 2026-04-17 06:24:05.412693 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-17 06:24:05.412700 | orchestrator | Friday 17 April 2026 06:23:59 +0000 (0:00:00.152) 0:29:02.390 ********** 2026-04-17 06:24:05.412706 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412713 | orchestrator | 2026-04-17 06:24:05.412720 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-17 06:24:05.412726 | orchestrator | Friday 17 April 2026 06:23:59 +0000 (0:00:00.153) 0:29:02.544 ********** 2026-04-17 06:24:05.412733 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412739 | orchestrator | 2026-04-17 06:24:05.412746 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-17 06:24:05.412752 | orchestrator | Friday 17 April 2026 06:23:59 +0000 (0:00:00.126) 0:29:02.670 ********** 2026-04-17 06:24:05.412759 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412766 | orchestrator | 2026-04-17 06:24:05.412772 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-17 06:24:05.412779 | orchestrator | Friday 17 April 2026 06:24:00 +0000 (0:00:00.151) 0:29:02.822 ********** 2026-04-17 06:24:05.412786 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412792 | orchestrator | 2026-04-17 06:24:05.412799 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-17 06:24:05.412805 | orchestrator | Friday 17 April 2026 06:24:00 +0000 (0:00:00.145) 0:29:02.967 ********** 2026-04-17 06:24:05.412812 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412818 | orchestrator | 2026-04-17 06:24:05.412829 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-04-17 06:24:05.412840 | orchestrator | Friday 17 April 2026 06:24:00 +0000 (0:00:00.139) 0:29:03.106 ********** 2026-04-17 06:24:05.412850 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412862 | orchestrator | 2026-04-17 06:24:05.412890 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-17 06:24:05.412902 | orchestrator | Friday 17 April 2026 06:24:00 +0000 (0:00:00.160) 0:29:03.266 ********** 2026-04-17 06:24:05.412913 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412925 | orchestrator | 2026-04-17 06:24:05.412936 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-17 06:24:05.412947 | orchestrator | Friday 17 April 2026 06:24:00 +0000 (0:00:00.147) 0:29:03.414 ********** 2026-04-17 06:24:05.412959 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.412971 | orchestrator | 2026-04-17 06:24:05.412982 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-17 06:24:05.412994 | orchestrator | Friday 17 April 2026 06:24:00 +0000 (0:00:00.142) 0:29:03.557 ********** 2026-04-17 06:24:05.413003 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:05.413010 | orchestrator | 2026-04-17 06:24:05.413016 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-17 06:24:05.413023 | orchestrator | Friday 17 April 2026 06:24:00 +0000 (0:00:00.162) 0:29:03.720 ********** 2026-04-17 06:24:05.413030 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-17 06:24:05.413036 | orchestrator | 2026-04-17 06:24:05.413043 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-17 06:24:05.413049 | orchestrator | Friday 17 April 2026 06:24:05 +0000 (0:00:04.240) 0:29:07.960 ********** 2026-04-17 06:24:05.413056 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 06:24:05.413063 | orchestrator | 2026-04-17 06:24:05.413082 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-17 06:24:27.295559 | orchestrator | Friday 17 April 2026 06:24:05 +0000 (0:00:00.185) 0:29:08.146 ********** 2026-04-17 06:24:27.295679 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-17 06:24:27.295699 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-17 06:24:27.295712 | orchestrator | 2026-04-17 06:24:27.295724 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-17 06:24:27.295735 | orchestrator | Friday 17 April 2026 06:24:09 +0000 (0:00:03.884) 0:29:12.031 ********** 2026-04-17 06:24:27.295746 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:27.295758 | orchestrator | 2026-04-17 06:24:27.295769 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-17 06:24:27.295780 | orchestrator | Friday 17 April 2026 06:24:09 +0000 (0:00:00.141) 0:29:12.172 ********** 2026-04-17 06:24:27.295790 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:27.295801 | orchestrator | 2026-04-17 06:24:27.295812 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 06:24:27.295825 | orchestrator | Friday 17 April 2026 06:24:09 +0000 (0:00:00.121) 0:29:12.294 ********** 2026-04-17 06:24:27.295835 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:27.295846 | orchestrator | 2026-04-17 06:24:27.295857 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-17 06:24:27.295942 | orchestrator | Friday 17 April 2026 06:24:09 +0000 (0:00:00.153) 0:29:12.448 ********** 2026-04-17 06:24:27.295957 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:27.295968 | orchestrator | 2026-04-17 06:24:27.295979 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 06:24:27.295990 | orchestrator | Friday 17 April 2026 06:24:09 +0000 (0:00:00.169) 0:29:12.617 ********** 2026-04-17 06:24:27.296000 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:27.296011 | orchestrator | 2026-04-17 06:24:27.296022 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 06:24:27.296033 | orchestrator | Friday 17 April 2026 06:24:10 +0000 (0:00:00.156) 0:29:12.774 ********** 2026-04-17 06:24:27.296043 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:24:27.296055 | orchestrator | 2026-04-17 06:24:27.296066 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 06:24:27.296077 | orchestrator | Friday 17 April 2026 06:24:10 +0000 (0:00:00.260) 0:29:13.034 ********** 2026-04-17 06:24:27.296088 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-17 06:24:27.296102 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-17 06:24:27.296114 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-17 06:24:27.296126 | orchestrator | skipping: 
[testbed-node-4] 2026-04-17 06:24:27.296139 | orchestrator | 2026-04-17 06:24:27.296151 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:24:27.296163 | orchestrator | Friday 17 April 2026 06:24:10 +0000 (0:00:00.502) 0:29:13.536 ********** 2026-04-17 06:24:27.296176 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-17 06:24:27.296188 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-17 06:24:27.296201 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-17 06:24:27.296213 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:27.296246 | orchestrator | 2026-04-17 06:24:27.296260 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:24:27.296273 | orchestrator | Friday 17 April 2026 06:24:11 +0000 (0:00:00.443) 0:29:13.980 ********** 2026-04-17 06:24:27.296285 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-17 06:24:27.296297 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-17 06:24:27.296312 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-17 06:24:27.296331 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:27.296349 | orchestrator | 2026-04-17 06:24:27.296368 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 06:24:27.296388 | orchestrator | Friday 17 April 2026 06:24:12 +0000 (0:00:00.936) 0:29:14.917 ********** 2026-04-17 06:24:27.296406 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:24:27.296423 | orchestrator | 2026-04-17 06:24:27.296442 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:24:27.296459 | orchestrator | Friday 17 April 2026 06:24:12 +0000 (0:00:00.184) 0:29:15.102 ********** 2026-04-17 06:24:27.296477 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-04-17 06:24:27.296496 | orchestrator | 2026-04-17 06:24:27.296513 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-17 06:24:27.296531 | orchestrator | Friday 17 April 2026 06:24:13 +0000 (0:00:01.196) 0:29:16.298 ********** 2026-04-17 06:24:27.296550 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:24:27.296569 | orchestrator | 2026-04-17 06:24:27.296587 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-17 06:24:27.296604 | orchestrator | Friday 17 April 2026 06:24:14 +0000 (0:00:00.824) 0:29:17.123 ********** 2026-04-17 06:24:27.296621 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-04-17 06:24:27.296639 | orchestrator | 2026-04-17 06:24:27.296683 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-17 06:24:27.296703 | orchestrator | Friday 17 April 2026 06:24:14 +0000 (0:00:00.212) 0:29:17.335 ********** 2026-04-17 06:24:27.296719 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 06:24:27.296730 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-17 06:24:27.296741 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 06:24:27.296752 | orchestrator | 2026-04-17 06:24:27.296763 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-17 06:24:27.296773 | orchestrator | Friday 17 April 2026 06:24:16 +0000 (0:00:02.114) 0:29:19.450 ********** 2026-04-17 06:24:27.296784 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-17 06:24:27.296795 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-17 06:24:27.296805 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:24:27.296816 | orchestrator | 2026-04-17 06:24:27.296826 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-17 06:24:27.296837 | orchestrator | Friday 17 April 2026 06:24:17 +0000 (0:00:00.998) 0:29:20.449 ********** 2026-04-17 06:24:27.296848 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:27.296858 | orchestrator | 2026-04-17 06:24:27.296869 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-17 06:24:27.296879 | orchestrator | Friday 17 April 2026 06:24:17 +0000 (0:00:00.135) 0:29:20.584 ********** 2026-04-17 06:24:27.296922 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-04-17 06:24:27.296934 | orchestrator | 2026-04-17 06:24:27.296944 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-17 06:24:27.296955 | orchestrator | Friday 17 April 2026 06:24:18 +0000 (0:00:00.211) 0:29:20.796 ********** 2026-04-17 06:24:27.296966 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 06:24:27.296978 | orchestrator | 2026-04-17 06:24:27.297002 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-17 06:24:27.297012 | orchestrator | Friday 17 April 2026 06:24:18 +0000 (0:00:00.613) 0:29:21.409 ********** 2026-04-17 06:24:27.297031 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 06:24:27.297043 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-17 06:24:27.297054 | orchestrator | 2026-04-17 06:24:27.297064 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-17 06:24:27.297075 | orchestrator | Friday 17 April 2026 06:24:22 +0000 (0:00:04.022) 0:29:25.432 ********** 
2026-04-17 06:24:27.297085 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 06:24:27.297096 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 06:24:27.297107 | orchestrator | 2026-04-17 06:24:27.297117 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-17 06:24:27.297128 | orchestrator | Friday 17 April 2026 06:24:24 +0000 (0:00:02.059) 0:29:27.491 ********** 2026-04-17 06:24:27.297138 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-17 06:24:27.297149 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:24:27.297160 | orchestrator | 2026-04-17 06:24:27.297170 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-17 06:24:27.297181 | orchestrator | Friday 17 April 2026 06:24:26 +0000 (0:00:01.276) 0:29:28.768 ********** 2026-04-17 06:24:27.297191 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-04-17 06:24:27.297202 | orchestrator | 2026-04-17 06:24:27.297212 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-17 06:24:27.297223 | orchestrator | Friday 17 April 2026 06:24:26 +0000 (0:00:00.236) 0:29:29.005 ********** 2026-04-17 06:24:27.297234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 06:24:27.297245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 06:24:27.297256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 06:24:27.297267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-17 06:24:27.297277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 06:24:27.297288 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:24:27.297299 | orchestrator | 2026-04-17 06:24:27.297309 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-17 06:24:27.297320 | orchestrator | Friday 17 April 2026 06:24:26 +0000 (0:00:00.598) 0:29:29.603 ********** 2026-04-17 06:24:27.297331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 06:24:27.297341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 06:24:27.297352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 06:24:27.297370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 06:25:12.103534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 06:25:12.103635 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:25:12.103651 | orchestrator | 2026-04-17 06:25:12.103664 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-17 06:25:12.103696 | orchestrator | Friday 17 April 2026 06:24:27 +0000 (0:00:00.603) 0:29:30.207 ********** 2026-04-17 06:25:12.103708 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 06:25:12.103720 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 06:25:12.103731 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 06:25:12.103742 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 06:25:12.103753 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 06:25:12.103764 | orchestrator | 2026-04-17 06:25:12.103776 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-17 06:25:12.103787 | orchestrator | Friday 17 April 2026 06:24:58 +0000 (0:00:31.095) 0:30:01.302 ********** 2026-04-17 06:25:12.103797 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:25:12.103808 | orchestrator | 2026-04-17 06:25:12.103819 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-17 06:25:12.103841 | orchestrator | Friday 17 April 2026 06:24:58 +0000 (0:00:00.132) 0:30:01.435 ********** 2026-04-17 06:25:12.103852 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:25:12.103863 | orchestrator | 2026-04-17 06:25:12.103874 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-17 06:25:12.103884 | orchestrator | Friday 17 April 2026 06:24:58 +0000 (0:00:00.134) 0:30:01.570 ********** 2026-04-17 06:25:12.103895 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-04-17 06:25:12.103906 | orchestrator | 2026-04-17 06:25:12.103916 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-04-17 06:25:12.103950 | orchestrator | Friday 17 April 2026 06:24:59 +0000 (0:00:00.228) 0:30:01.799 ********** 2026-04-17 06:25:12.103960 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-04-17 06:25:12.103971 | orchestrator | 2026-04-17 06:25:12.103981 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-17 06:25:12.103992 | orchestrator | Friday 17 April 2026 06:24:59 +0000 (0:00:00.216) 0:30:02.016 ********** 2026-04-17 06:25:12.104003 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:25:12.104014 | orchestrator | 2026-04-17 06:25:12.104025 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-17 06:25:12.104035 | orchestrator | Friday 17 April 2026 06:25:00 +0000 (0:00:01.059) 0:30:03.076 ********** 2026-04-17 06:25:12.104046 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:25:12.104057 | orchestrator | 2026-04-17 06:25:12.104067 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-17 06:25:12.104078 | orchestrator | Friday 17 April 2026 06:25:01 +0000 (0:00:01.264) 0:30:04.340 ********** 2026-04-17 06:25:12.104090 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:25:12.104102 | orchestrator | 2026-04-17 06:25:12.104114 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-17 06:25:12.104126 | orchestrator | Friday 17 April 2026 06:25:02 +0000 (0:00:01.210) 0:30:05.550 ********** 2026-04-17 06:25:12.104139 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-17 06:25:12.104151 | orchestrator | 2026-04-17 06:25:12.104164 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-17 06:25:12.104176 | 
orchestrator | 2026-04-17 06:25:12.104188 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:25:12.104208 | orchestrator | Friday 17 April 2026 06:25:05 +0000 (0:00:02.497) 0:30:08.048 ********** 2026-04-17 06:25:12.104221 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-04-17 06:25:12.104234 | orchestrator | 2026-04-17 06:25:12.104246 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 06:25:12.104259 | orchestrator | Friday 17 April 2026 06:25:05 +0000 (0:00:00.313) 0:30:08.361 ********** 2026-04-17 06:25:12.104271 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:12.104283 | orchestrator | 2026-04-17 06:25:12.104295 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-17 06:25:12.104308 | orchestrator | Friday 17 April 2026 06:25:06 +0000 (0:00:00.480) 0:30:08.842 ********** 2026-04-17 06:25:12.104320 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:12.104332 | orchestrator | 2026-04-17 06:25:12.104342 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 06:25:12.104353 | orchestrator | Friday 17 April 2026 06:25:06 +0000 (0:00:00.151) 0:30:08.993 ********** 2026-04-17 06:25:12.104364 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:12.104374 | orchestrator | 2026-04-17 06:25:12.104385 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:25:12.104396 | orchestrator | Friday 17 April 2026 06:25:06 +0000 (0:00:00.504) 0:30:09.498 ********** 2026-04-17 06:25:12.104407 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:12.104418 | orchestrator | 2026-04-17 06:25:12.104444 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-17 06:25:12.104456 | orchestrator | Friday 17 
April 2026 06:25:06 +0000 (0:00:00.155) 0:30:09.654 ********** 2026-04-17 06:25:12.104467 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:12.104478 | orchestrator | 2026-04-17 06:25:12.104488 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-17 06:25:12.104499 | orchestrator | Friday 17 April 2026 06:25:07 +0000 (0:00:00.150) 0:30:09.804 ********** 2026-04-17 06:25:12.104510 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:12.104520 | orchestrator | 2026-04-17 06:25:12.104531 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-17 06:25:12.104542 | orchestrator | Friday 17 April 2026 06:25:07 +0000 (0:00:00.161) 0:30:09.966 ********** 2026-04-17 06:25:12.104552 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:25:12.104563 | orchestrator | 2026-04-17 06:25:12.104573 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-17 06:25:12.104584 | orchestrator | Friday 17 April 2026 06:25:07 +0000 (0:00:00.493) 0:30:10.460 ********** 2026-04-17 06:25:12.104595 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:12.104605 | orchestrator | 2026-04-17 06:25:12.104616 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-17 06:25:12.104626 | orchestrator | Friday 17 April 2026 06:25:07 +0000 (0:00:00.146) 0:30:10.607 ********** 2026-04-17 06:25:12.104637 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:25:12.104648 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:25:12.104658 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:25:12.104669 | orchestrator | 2026-04-17 06:25:12.104679 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-04-17 06:25:12.104690 | orchestrator | Friday 17 April 2026 06:25:08 +0000 (0:00:00.711) 0:30:11.319 ********** 2026-04-17 06:25:12.104700 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:12.104711 | orchestrator | 2026-04-17 06:25:12.104722 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-17 06:25:12.104737 | orchestrator | Friday 17 April 2026 06:25:08 +0000 (0:00:00.257) 0:30:11.576 ********** 2026-04-17 06:25:12.104748 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 06:25:12.104758 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 06:25:12.104775 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 06:25:12.104786 | orchestrator | 2026-04-17 06:25:12.104796 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-17 06:25:12.104807 | orchestrator | Friday 17 April 2026 06:25:10 +0000 (0:00:01.918) 0:30:13.495 ********** 2026-04-17 06:25:12.104818 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-17 06:25:12.104829 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-17 06:25:12.104840 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-17 06:25:12.104851 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:25:12.104861 | orchestrator | 2026-04-17 06:25:12.104872 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-17 06:25:12.104882 | orchestrator | Friday 17 April 2026 06:25:11 +0000 (0:00:00.483) 0:30:13.979 ********** 2026-04-17 06:25:12.104894 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-17 06:25:12.104907 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-17 06:25:12.104951 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 06:25:12.104964 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:25:12.104975 | orchestrator | 2026-04-17 06:25:12.104986 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-17 06:25:12.104996 | orchestrator | Friday 17 April 2026 06:25:12 +0000 (0:00:00.793) 0:30:14.772 ********** 2026-04-17 06:25:12.105009 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:12.105030 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:16.596734 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:16.596864 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:25:16.596884 | orchestrator | 2026-04-17 06:25:16.596897 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-17 06:25:16.596909 | orchestrator | Friday 17 April 2026 06:25:12 +0000 (0:00:00.164) 0:30:14.937 ********** 2026-04-17 06:25:16.596957 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'b4cdabd05808', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 06:25:09.343392', 'end': '2026-04-17 06:25:09.387808', 'delta': '0:00:00.044416', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b4cdabd05808'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-17 06:25:16.597015 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '293a28d17cc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 06:25:09.983377', 'end': '2026-04-17 06:25:10.051927', 'delta': '0:00:00.068550', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['293a28d17cc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-17 06:25:16.597028 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '549053e28e18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 06:25:10.560639', 'end': '2026-04-17 06:25:10.605357', 'delta': '0:00:00.044718', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['549053e28e18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-17 06:25:16.597039 | orchestrator | 2026-04-17 06:25:16.597050 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-17 06:25:16.597061 | orchestrator | Friday 17 April 2026 06:25:12 +0000 (0:00:00.227) 0:30:15.164 ********** 2026-04-17 06:25:16.597072 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:16.597084 | orchestrator | 2026-04-17 06:25:16.597095 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-17 06:25:16.597105 | orchestrator | Friday 17 April 2026 06:25:12 +0000 (0:00:00.252) 0:30:15.417 ********** 2026-04-17 06:25:16.597116 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:25:16.597127 | orchestrator | 2026-04-17 06:25:16.597167 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-04-17 06:25:16.597179 | orchestrator | Friday 17 April 2026 06:25:12 +0000 (0:00:00.238) 0:30:15.655 ********** 2026-04-17 06:25:16.597189 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:16.597200 | orchestrator | 2026-04-17 06:25:16.597211 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-17 06:25:16.597222 | orchestrator | Friday 17 April 2026 06:25:13 +0000 (0:00:00.157) 0:30:15.812 ********** 2026-04-17 06:25:16.597232 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:25:16.597245 | orchestrator | 2026-04-17 06:25:16.597257 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:25:16.597269 | orchestrator | Friday 17 April 2026 06:25:14 +0000 (0:00:01.383) 0:30:17.196 ********** 2026-04-17 06:25:16.597281 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:16.597293 | orchestrator | 2026-04-17 06:25:16.597304 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 06:25:16.597316 | orchestrator | Friday 17 April 2026 06:25:14 +0000 (0:00:00.533) 0:30:17.729 ********** 2026-04-17 06:25:16.597346 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:25:16.597360 | orchestrator | 2026-04-17 06:25:16.597373 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-17 06:25:16.597394 | orchestrator | Friday 17 April 2026 06:25:15 +0000 (0:00:00.147) 0:30:17.877 ********** 2026-04-17 06:25:16.597406 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:25:16.597419 | orchestrator | 2026-04-17 06:25:16.597431 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 06:25:16.597444 | orchestrator | Friday 17 April 2026 06:25:15 +0000 (0:00:00.230) 0:30:18.107 ********** 2026-04-17 06:25:16.597456 | orchestrator | 
skipping: [testbed-node-5] 2026-04-17 06:25:16.597469 | orchestrator | 2026-04-17 06:25:16.597481 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 06:25:16.597493 | orchestrator | Friday 17 April 2026 06:25:15 +0000 (0:00:00.150) 0:30:18.258 ********** 2026-04-17 06:25:16.597505 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:25:16.597517 | orchestrator | 2026-04-17 06:25:16.597529 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-17 06:25:16.597541 | orchestrator | Friday 17 April 2026 06:25:15 +0000 (0:00:00.142) 0:30:18.401 ********** 2026-04-17 06:25:16.597554 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:16.597567 | orchestrator | 2026-04-17 06:25:16.597578 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-17 06:25:16.597591 | orchestrator | Friday 17 April 2026 06:25:15 +0000 (0:00:00.175) 0:30:18.576 ********** 2026-04-17 06:25:16.597604 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:25:16.597616 | orchestrator | 2026-04-17 06:25:16.597629 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 06:25:16.597640 | orchestrator | Friday 17 April 2026 06:25:15 +0000 (0:00:00.146) 0:30:18.722 ********** 2026-04-17 06:25:16.597651 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:16.597662 | orchestrator | 2026-04-17 06:25:16.597673 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 06:25:16.597684 | orchestrator | Friday 17 April 2026 06:25:16 +0000 (0:00:00.178) 0:30:18.901 ********** 2026-04-17 06:25:16.597694 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:25:16.597705 | orchestrator | 2026-04-17 06:25:16.597722 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 06:25:16.597734 
| orchestrator | Friday 17 April 2026 06:25:16 +0000 (0:00:00.136) 0:30:19.037 ********** 2026-04-17 06:25:16.597745 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:25:16.597756 | orchestrator | 2026-04-17 06:25:16.597767 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 06:25:16.597778 | orchestrator | Friday 17 April 2026 06:25:16 +0000 (0:00:00.189) 0:30:19.227 ********** 2026-04-17 06:25:16.597790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:25:16.597803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'uuids': ['7145b7e9-237d-4eff-af62-82cfb643a183'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS']}})  2026-04-17 06:25:16.597816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ab95973', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-17 06:25:16.597844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f']}})  2026-04-17 06:25:16.731490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:25:16.731614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:25:16.731650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-17 06:25:16.731665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:25:16.731677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ', 'dm-uuid-CRYPT-LUKS2-9b48552cb2fb461da2ba0698b00ea049-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:25:16.731689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:25:16.731720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'uuids': ['9b48552c-b2fb-461d-a2ba-0698b00ea049'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ']}})  2026-04-17 06:25:16.731753 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d']}})  2026-04-17 06:25:16.731766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:25:16.731788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b9d69c97', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-17 06:25:16.731810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:25:16.731822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-17 06:25:16.731841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS', 'dm-uuid-CRYPT-LUKS2-7145b7e9237d4effaf6282cfb643a183-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-17 06:25:17.089513 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:25:17.089634 | orchestrator | 2026-04-17 06:25:17.089658 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 06:25:17.089678 | orchestrator | Friday 17 April 2026 06:25:16 +0000 (0:00:00.367) 0:30:19.595 ********** 2026-04-17 06:25:17.089701 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:17.089748 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d', 'dm-uuid-LVM-R3uNw0MOs0IVvALnwwNLuTJe4sSwVEyv5FYKu9jO3XL6au8ziCbGkm5eGqnmR8PS'], 'uuids': ['7145b7e9-237d-4eff-af62-82cfb643a183'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:17.089772 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134', 'scsi-SQEMU_QEMU_HARDDISK_8ab95973-5989-4e6f-8d83-877ad6e28134'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ab95973', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:17.089814 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hg7lx7-RNgr-v11F-9VOR-TZhc-9G3M-Oi4Goe', 'scsi-0QEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac', 'scsi-SQEMU_QEMU_HARDDISK_1b38fc72-8b65-40d7-adf3-b5f2e5cd07ac'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:17.089852 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:17.089865 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:17.089882 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-17-02-37-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:17.089895 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:17.089914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ', 'dm-uuid-CRYPT-LUKS2-9b48552cb2fb461da2ba0698b00ea049-yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:17.089973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:17.089995 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--690571ed--11b8--555e--b420--011f2882a19f-osd--block--690571ed--11b8--555e--b420--011f2882a19f', 'dm-uuid-LVM-3EQ4UsbmfCExGaWTGQOFAGVqtHkW38ntyoGOyt12uqyfxALEmGDxhGoNkfHZQerQ'], 'uuids': ['9b48552c-b2fb-461d-a2ba-0698b00ea049'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1b38fc72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yoGOyt-12uq-yfxA-LEmG-DxhG-oNkf-HZQerQ']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:20.536871 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MAaRAM-GStN-MVQ0-ItuH-mGaz-3psf-r09l2W', 'scsi-0QEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b', 'scsi-SQEMU_QEMU_HARDDISK_0790345e-708b-44d5-b129-73ff7ecdfb8b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0790345e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--58d5b32d--9713--5f24--a4e2--aea701c9df8d-osd--block--58d5b32d--9713--5f24--a4e2--aea701c9df8d']}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:20.537029 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-17 06:25:20.537068 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b9d69c97', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9d69c97-6a14-4810-858c-efad7be3f87e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:25:20.537103 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:25:20.537123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:25:20.537135 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS', 'dm-uuid-CRYPT-LUKS2-7145b7e9237d4effaf6282cfb643a183-5FYKu9-jO3X-L6au-8ziC-bGkm-5eGq-nmR8PS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-17 06:25:20.537156 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:20.537169 | orchestrator |
2026-04-17 06:25:20.537181 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-17 06:25:20.537193 | orchestrator | Friday 17 April 2026  06:25:17 +0000 (0:00:00.442)       0:30:20.037 **********
2026-04-17 06:25:20.537204 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:20.537216 | orchestrator |
2026-04-17 06:25:20.537227 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-17 06:25:20.537237 | orchestrator | Friday 17 April 2026  06:25:17 +0000 (0:00:00.510)       0:30:20.547 **********
2026-04-17 06:25:20.537248 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:20.537259 | orchestrator |
2026-04-17 06:25:20.537274 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:25:20.537285 | orchestrator | Friday 17 April 2026  06:25:18 +0000 (0:00:00.546)       0:30:21.094 **********
2026-04-17 06:25:20.537296 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:20.537306 | orchestrator |
2026-04-17 06:25:20.537317 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:25:20.537328 | orchestrator | Friday 17 April 2026  06:25:18 +0000 (0:00:00.490)       0:30:21.584 **********
2026-04-17 06:25:20.537339 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:20.537350 | orchestrator |
2026-04-17 06:25:20.537360 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-17 06:25:20.537371 | orchestrator | Friday 17 April 2026  06:25:18 +0000 (0:00:00.114)       0:30:21.699 **********
2026-04-17 06:25:20.537382 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:20.537394 | orchestrator |
2026-04-17 06:25:20.537406 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-17 06:25:20.537419 | orchestrator | Friday 17 April 2026  06:25:19 +0000 (0:00:00.273)       0:30:21.972 **********
2026-04-17 06:25:20.537432 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:20.537444 | orchestrator |
2026-04-17 06:25:20.537456 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-17 06:25:20.537468 | orchestrator | Friday 17 April 2026  06:25:19 +0000 (0:00:00.158)       0:30:22.130 **********
2026-04-17 06:25:20.537481 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-17 06:25:20.537493 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-17 06:25:20.537506 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-17 06:25:20.537518 | orchestrator |
2026-04-17 06:25:20.537531 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-17 06:25:20.537543 | orchestrator | Friday 17 April 2026  06:25:20 +0000 (0:00:00.700)       0:30:22.831 **********
2026-04-17 06:25:20.537555 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-17 06:25:20.537566 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-17 06:25:20.537577 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-17 06:25:20.537588 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:20.537599 | orchestrator |
2026-04-17 06:25:20.537609 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-17 06:25:20.537620 | orchestrator | Friday 17 April 2026  06:25:20 +0000 (0:00:00.194)       0:30:23.025 **********
2026-04-17 06:25:20.537630 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-04-17 06:25:20.537642 | orchestrator |
2026-04-17 06:25:20.537660 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-17 06:25:36.004768 | orchestrator | Friday 17 April 2026  06:25:20 +0000 (0:00:00.252)       0:30:23.278 **********
2026-04-17 06:25:36.004923 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.005009 | orchestrator |
2026-04-17 06:25:36.005025 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-17 06:25:36.005036 | orchestrator | Friday 17 April 2026  06:25:20 +0000 (0:00:00.140)       0:30:23.418 **********
2026-04-17 06:25:36.005048 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.005059 | orchestrator |
2026-04-17 06:25:36.005070 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-17 06:25:36.005080 | orchestrator | Friday 17 April 2026  06:25:20 +0000 (0:00:00.147)       0:30:23.566 **********
2026-04-17 06:25:36.005091 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.005102 | orchestrator |
2026-04-17 06:25:36.005112 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-17 06:25:36.005123 | orchestrator | Friday 17 April 2026  06:25:20 +0000 (0:00:00.162)       0:30:23.728 **********
2026-04-17 06:25:36.005134 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:36.005145 | orchestrator |
2026-04-17 06:25:36.005170 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-17 06:25:36.005181 | orchestrator | Friday 17 April 2026  06:25:21 +0000 (0:00:00.238)       0:30:23.966 **********
2026-04-17 06:25:36.005192 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-17 06:25:36.005204 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-17 06:25:36.005215 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-17 06:25:36.005226 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.005237 | orchestrator |
2026-04-17 06:25:36.005247 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-17 06:25:36.005258 | orchestrator | Friday 17 April 2026  06:25:22 +0000 (0:00:01.162)       0:30:25.129 **********
2026-04-17 06:25:36.005269 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-17 06:25:36.005282 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-17 06:25:36.005295 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-17 06:25:36.005307 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.005319 | orchestrator |
2026-04-17 06:25:36.005332 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-17 06:25:36.005344 | orchestrator | Friday 17 April 2026  06:25:22 +0000 (0:00:00.420)       0:30:25.550 **********
2026-04-17 06:25:36.005356 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-17 06:25:36.005368 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-17 06:25:36.005381 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-17 06:25:36.005393 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.005405 | orchestrator |
2026-04-17 06:25:36.005418 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-17 06:25:36.005431 | orchestrator | Friday 17 April 2026  06:25:23 +0000 (0:00:00.407)       0:30:25.958 **********
2026-04-17 06:25:36.005443 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:36.005456 | orchestrator |
2026-04-17 06:25:36.005468 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-17 06:25:36.005481 | orchestrator | Friday 17 April 2026  06:25:23 +0000 (0:00:00.169)       0:30:26.127 **********
2026-04-17 06:25:36.005493 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-17 06:25:36.005506 | orchestrator |
2026-04-17 06:25:36.005519 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-17 06:25:36.005531 | orchestrator | Friday 17 April 2026  06:25:23 +0000 (0:00:00.360)       0:30:26.488 **********
2026-04-17 06:25:36.005543 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:25:36.005557 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:25:36.005569 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:25:36.005582 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 06:25:36.005603 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 06:25:36.005616 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-17 06:25:36.005629 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:25:36.005641 | orchestrator |
2026-04-17 06:25:36.005652 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-17 06:25:36.005663 | orchestrator | Friday 17 April 2026  06:25:24 +0000 (0:00:00.847)       0:30:27.336 **********
2026-04-17 06:25:36.005673 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-17 06:25:36.005684 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-17 06:25:36.005694 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-17 06:25:36.005705 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-17 06:25:36.005715 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-17 06:25:36.005726 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-17 06:25:36.005737 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-17 06:25:36.005748 | orchestrator |
2026-04-17 06:25:36.005758 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-04-17 06:25:36.005769 | orchestrator | Friday 17 April 2026  06:25:26 +0000 (0:00:01.836)       0:30:29.172 **********
2026-04-17 06:25:36.005787 | orchestrator | changed: [testbed-node-5]
2026-04-17 06:25:36.005807 | orchestrator |
2026-04-17 06:25:36.005849 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-04-17 06:25:36.005870 | orchestrator | Friday 17 April 2026  06:25:27 +0000 (0:00:01.285)       0:30:30.459 **********
2026-04-17 06:25:36.005889 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 06:25:36.005911 | orchestrator |
2026-04-17 06:25:36.005930 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-04-17 06:25:36.005975 | orchestrator | Friday 17 April 2026  06:25:29 +0000 (0:00:01.838)       0:30:32.298 **********
2026-04-17 06:25:36.005994 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 06:25:36.006098 | orchestrator |
2026-04-17 06:25:36.006124 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 06:25:36.006135 | orchestrator | Friday 17 April 2026  06:25:30 +0000 (0:00:01.228)       0:30:33.526 **********
2026-04-17 06:25:36.006154 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-04-17 06:25:36.006165 | orchestrator |
2026-04-17 06:25:36.006176 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 06:25:36.006187 | orchestrator | Friday 17 April 2026  06:25:30 +0000 (0:00:00.210)       0:30:33.737 **********
2026-04-17 06:25:36.006198 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-04-17 06:25:36.006209 | orchestrator |
2026-04-17 06:25:36.006220 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 06:25:36.006231 | orchestrator | Friday 17 April 2026  06:25:31 +0000 (0:00:00.548)       0:30:34.285 **********
2026-04-17 06:25:36.006241 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.006253 | orchestrator |
2026-04-17 06:25:36.006263 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 06:25:36.006274 | orchestrator | Friday 17 April 2026  06:25:31 +0000 (0:00:00.167)       0:30:34.453 **********
2026-04-17 06:25:36.006285 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:36.006296 | orchestrator |
2026-04-17 06:25:36.006307 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 06:25:36.006328 | orchestrator | Friday 17 April 2026  06:25:32 +0000 (0:00:00.506)       0:30:34.960 **********
2026-04-17 06:25:36.006339 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:36.006349 | orchestrator |
2026-04-17 06:25:36.006360 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 06:25:36.006371 | orchestrator | Friday 17 April 2026  06:25:32 +0000 (0:00:00.531)       0:30:35.492 **********
2026-04-17 06:25:36.006382 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:36.006392 | orchestrator |
2026-04-17 06:25:36.006403 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 06:25:36.006414 | orchestrator | Friday 17 April 2026  06:25:33 +0000 (0:00:00.541)       0:30:36.033 **********
2026-04-17 06:25:36.006424 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.006435 | orchestrator |
2026-04-17 06:25:36.006446 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 06:25:36.006456 | orchestrator | Friday 17 April 2026  06:25:33 +0000 (0:00:00.140)       0:30:36.174 **********
2026-04-17 06:25:36.006467 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.006478 | orchestrator |
2026-04-17 06:25:36.006489 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 06:25:36.006499 | orchestrator | Friday 17 April 2026  06:25:33 +0000 (0:00:00.127)       0:30:36.301 **********
2026-04-17 06:25:36.006510 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.006521 | orchestrator |
2026-04-17 06:25:36.006531 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 06:25:36.006542 | orchestrator | Friday 17 April 2026  06:25:33 +0000 (0:00:00.123)       0:30:36.424 **********
2026-04-17 06:25:36.006553 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:36.006563 | orchestrator |
2026-04-17 06:25:36.006574 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 06:25:36.006585 | orchestrator | Friday 17 April 2026  06:25:34 +0000 (0:00:00.522)       0:30:36.947 **********
2026-04-17 06:25:36.006596 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:36.006606 | orchestrator |
2026-04-17 06:25:36.006617 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 06:25:36.006628 | orchestrator | Friday 17 April 2026  06:25:34 +0000 (0:00:00.561)       0:30:37.508 **********
2026-04-17 06:25:36.006638 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.006649 | orchestrator |
2026-04-17 06:25:36.006660 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 06:25:36.006670 | orchestrator | Friday 17 April 2026  06:25:34 +0000 (0:00:00.130)       0:30:37.639 **********
2026-04-17 06:25:36.006681 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.006692 | orchestrator |
2026-04-17 06:25:36.006702 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 06:25:36.006713 | orchestrator | Friday 17 April 2026  06:25:35 +0000 (0:00:00.134)       0:30:37.774 **********
2026-04-17 06:25:36.006724 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:36.006735 | orchestrator |
2026-04-17 06:25:36.006745 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 06:25:36.006756 | orchestrator | Friday 17 April 2026  06:25:35 +0000 (0:00:00.144)       0:30:37.919 **********
2026-04-17 06:25:36.006767 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:36.006777 | orchestrator |
2026-04-17 06:25:36.006788 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 06:25:36.006799 | orchestrator | Friday 17 April 2026  06:25:35 +0000 (0:00:00.526)       0:30:38.446 **********
2026-04-17 06:25:36.006809 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:36.006820 | orchestrator |
2026-04-17 06:25:36.006831 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 06:25:36.006841 | orchestrator | Friday 17 April 2026  06:25:35 +0000 (0:00:00.171)       0:30:38.617 **********
2026-04-17 06:25:36.006852 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:36.006863 | orchestrator |
2026-04-17 06:25:36.006885 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 06:25:47.877089 | orchestrator | Friday 17 April 2026  06:25:35 +0000 (0:00:00.121)       0:30:38.738 **********
2026-04-17 06:25:47.877205 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877223 | orchestrator |
2026-04-17 06:25:47.877236 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 06:25:47.877248 | orchestrator | Friday 17 April 2026  06:25:36 +0000 (0:00:00.142)       0:30:38.881 **********
2026-04-17 06:25:47.877259 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877270 | orchestrator |
2026-04-17 06:25:47.877281 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 06:25:47.877292 | orchestrator | Friday 17 April 2026  06:25:36 +0000 (0:00:00.144)       0:30:39.025 **********
2026-04-17 06:25:47.877303 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:47.877314 | orchestrator |
2026-04-17 06:25:47.877325 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 06:25:47.877336 | orchestrator | Friday 17 April 2026  06:25:36 +0000 (0:00:00.154)       0:30:39.180 **********
2026-04-17 06:25:47.877346 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:47.877357 | orchestrator |
2026-04-17 06:25:47.877384 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-17 06:25:47.877395 | orchestrator | Friday 17 April 2026  06:25:36 +0000 (0:00:00.232)       0:30:39.413 **********
2026-04-17 06:25:47.877406 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877417 | orchestrator |
2026-04-17 06:25:47.877428 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-17 06:25:47.877438 | orchestrator | Friday 17 April 2026  06:25:36 +0000 (0:00:00.117)       0:30:39.530 **********
2026-04-17 06:25:47.877449 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877461 | orchestrator |
2026-04-17 06:25:47.877472 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-17 06:25:47.877483 | orchestrator | Friday 17 April 2026  06:25:36 +0000 (0:00:00.140)       0:30:39.671 **********
2026-04-17 06:25:47.877494 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877505 | orchestrator |
2026-04-17 06:25:47.877515 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-17 06:25:47.877526 | orchestrator | Friday 17 April 2026  06:25:37 +0000 (0:00:00.150)       0:30:39.822 **********
2026-04-17 06:25:47.877537 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877547 | orchestrator |
2026-04-17 06:25:47.877558 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-17 06:25:47.877569 | orchestrator | Friday 17 April 2026  06:25:37 +0000 (0:00:00.130)       0:30:39.953 **********
2026-04-17 06:25:47.877580 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877590 | orchestrator |
2026-04-17 06:25:47.877601 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-17 06:25:47.877612 | orchestrator | Friday 17 April 2026  06:25:37 +0000 (0:00:00.139)       0:30:40.092 **********
2026-04-17 06:25:47.877622 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877633 | orchestrator |
2026-04-17 06:25:47.877644 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-17 06:25:47.877655 | orchestrator | Friday 17 April 2026  06:25:37 +0000 (0:00:00.504)       0:30:40.596 **********
2026-04-17 06:25:47.877665 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877676 | orchestrator |
2026-04-17 06:25:47.877687 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-17 06:25:47.877698 | orchestrator | Friday 17 April 2026  06:25:37 +0000 (0:00:00.136)       0:30:40.732 **********
2026-04-17 06:25:47.877709 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877720 | orchestrator |
2026-04-17 06:25:47.877731 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-17 06:25:47.877741 | orchestrator | Friday 17 April 2026  06:25:38 +0000 (0:00:00.152)       0:30:40.884 **********
2026-04-17 06:25:47.877752 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877763 | orchestrator |
2026-04-17 06:25:47.877773 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-17 06:25:47.877808 | orchestrator | Friday 17 April 2026  06:25:38 +0000 (0:00:00.140)       0:30:41.025 **********
2026-04-17 06:25:47.877819 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877830 | orchestrator |
2026-04-17 06:25:47.877841 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-17 06:25:47.877851 | orchestrator | Friday 17 April 2026  06:25:38 +0000 (0:00:00.137)       0:30:41.163 **********
2026-04-17 06:25:47.877862 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877873 | orchestrator |
2026-04-17 06:25:47.877883 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-17 06:25:47.877894 | orchestrator | Friday 17 April 2026  06:25:38 +0000 (0:00:00.157)       0:30:41.320 **********
2026-04-17 06:25:47.877905 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.877915 | orchestrator |
2026-04-17 06:25:47.877926 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-17 06:25:47.877937 | orchestrator | Friday 17 April 2026  06:25:38 +0000 (0:00:00.225)       0:30:41.546 **********
2026-04-17 06:25:47.877970 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:47.877981 | orchestrator |
2026-04-17 06:25:47.877993 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-17 06:25:47.878003 | orchestrator | Friday 17 April 2026  06:25:39 +0000 (0:00:00.905)       0:30:42.452 **********
2026-04-17 06:25:47.878079 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:47.878094 | orchestrator |
2026-04-17 06:25:47.878105 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-17 06:25:47.878116 | orchestrator | Friday 17 April 2026  06:25:40 +0000 (0:00:01.191)       0:30:43.644 **********
2026-04-17 06:25:47.878126 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-04-17 06:25:47.878139 | orchestrator |
2026-04-17 06:25:47.878149 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-17 06:25:47.878160 | orchestrator | Friday 17 April 2026  06:25:41 +0000 (0:00:00.226)       0:30:43.870 **********
2026-04-17 06:25:47.878170 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.878181 | orchestrator |
2026-04-17 06:25:47.878192 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-17 06:25:47.878220 | orchestrator | Friday 17 April 2026  06:25:41 +0000 (0:00:00.151)       0:30:44.021 **********
2026-04-17 06:25:47.878232 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.878243 | orchestrator |
2026-04-17 06:25:47.878254 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-17 06:25:47.878265 | orchestrator | Friday 17 April 2026  06:25:41 +0000 (0:00:00.503)       0:30:44.525 **********
2026-04-17 06:25:47.878275 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-17 06:25:47.878286 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-17 06:25:47.878297 | orchestrator |
2026-04-17 06:25:47.878308 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-17 06:25:47.878318 | orchestrator | Friday 17 April 2026  06:25:42 +0000 (0:00:00.801)       0:30:45.326 **********
2026-04-17 06:25:47.878329 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:47.878340 | orchestrator |
2026-04-17 06:25:47.878351 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-17 06:25:47.878367 | orchestrator | Friday 17 April 2026  06:25:43 +0000 (0:00:00.468)       0:30:45.795 **********
2026-04-17 06:25:47.878378 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.878389 | orchestrator |
2026-04-17 06:25:47.878400 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-17 06:25:47.878410 | orchestrator | Friday 17 April 2026  06:25:43 +0000 (0:00:00.150)       0:30:45.945 **********
2026-04-17 06:25:47.878421 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.878432 | orchestrator |
2026-04-17 06:25:47.878442 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-17 06:25:47.878453 | orchestrator | Friday 17 April 2026  06:25:43 +0000 (0:00:00.165)       0:30:46.111 **********
2026-04-17 06:25:47.878473 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.878484 | orchestrator |
2026-04-17 06:25:47.878495 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-17 06:25:47.878505 | orchestrator | Friday 17 April 2026  06:25:43 +0000 (0:00:00.136)       0:30:46.247 **********
2026-04-17 06:25:47.878516 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-04-17 06:25:47.878527 | orchestrator |
2026-04-17 06:25:47.878538 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-17 06:25:47.878548 | orchestrator | Friday 17 April 2026  06:25:43 +0000 (0:00:00.234)       0:30:46.481 **********
2026-04-17 06:25:47.878559 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:47.878570 | orchestrator |
2026-04-17 06:25:47.878581 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-17 06:25:47.878591 | orchestrator | Friday 17 April 2026  06:25:44 +0000 (0:00:00.734)       0:30:47.216 **********
2026-04-17 06:25:47.878602 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-17 06:25:47.878613 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-17 06:25:47.878623 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-17 06:25:47.878634 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.878645 | orchestrator |
2026-04-17 06:25:47.878656 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-17 06:25:47.878666 | orchestrator | Friday 17 April 2026  06:25:44 +0000 (0:00:00.176)       0:30:47.392 **********
2026-04-17 06:25:47.878677 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.878688 | orchestrator |
2026-04-17 06:25:47.878699 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-17 06:25:47.878709 | orchestrator | Friday 17 April 2026  06:25:44 +0000 (0:00:00.141)       0:30:47.534 **********
2026-04-17 06:25:47.878720 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.878730 | orchestrator |
2026-04-17 06:25:47.878741 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-17 06:25:47.878752 | orchestrator | Friday 17 April 2026  06:25:44 +0000 (0:00:00.170)       0:30:47.704 **********
2026-04-17 06:25:47.878763 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.878773 | orchestrator |
2026-04-17 06:25:47.878784 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-17 06:25:47.878794 | orchestrator | Friday 17 April 2026  06:25:45 +0000 (0:00:00.163)       0:30:47.868 **********
2026-04-17 06:25:47.878805 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.878816 | orchestrator |
2026-04-17 06:25:47.878827 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-17 06:25:47.878837 | orchestrator | Friday 17 April 2026  06:25:45 +0000 (0:00:00.530)       0:30:48.398 **********
2026-04-17 06:25:47.878848 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.878858 | orchestrator |
2026-04-17 06:25:47.878869 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-17 06:25:47.878880 | orchestrator | Friday 17 April 2026  06:25:45 +0000 (0:00:00.157)       0:30:48.556 **********
2026-04-17 06:25:47.878890 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:47.878901 | orchestrator |
2026-04-17 06:25:47.878911 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-17 06:25:47.878922 | orchestrator | Friday 17 April 2026  06:25:47 +0000 (0:00:01.488)       0:30:50.045 **********
2026-04-17 06:25:47.878933 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:25:47.878943 | orchestrator |
2026-04-17 06:25:47.878972 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-17 06:25:47.878983 | orchestrator | Friday 17 April 2026  06:25:47 +0000 (0:00:00.160)       0:30:50.205 **********
2026-04-17 06:25:47.878994 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-04-17 06:25:47.879005 | orchestrator |
2026-04-17 06:25:47.879022 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-17 06:25:47.879033 | orchestrator | Friday 17 April 2026  06:25:47 +0000 (0:00:00.228)       0:30:50.434 **********
2026-04-17 06:25:47.879044 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:25:47.879055 | orchestrator |
2026-04-17 06:25:47.879066 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-17 06:25:47.879083 | orchestrator | Friday 17 April 2026  06:25:47 +0000 (0:00:00.177)       0:30:50.611 **********
2026-04-17 06:26:06.691789 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:26:06.691885 | orchestrator |
2026-04-17 06:26:06.691897 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-17 06:26:06.691906 | orchestrator | Friday 17 April 2026  06:25:48 +0000 (0:00:00.176)       0:30:50.788 **********
2026-04-17 06:26:06.691913 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:26:06.691920 | orchestrator |
2026-04-17 06:26:06.691927 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-17 06:26:06.691934 | orchestrator | Friday 17 April 2026  06:25:48 +0000 (0:00:00.161)       0:30:50.949 **********
2026-04-17 06:26:06.691941 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:26:06.691948 | orchestrator |
2026-04-17 06:26:06.691954 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-17 06:26:06.691992 | orchestrator | Friday 17 April 2026  06:25:48 +0000 (0:00:00.143)       0:30:51.093 **********
2026-04-17 06:26:06.692000 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:26:06.692006 | orchestrator |
2026-04-17 06:26:06.692026 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-17 06:26:06.692033 | orchestrator | Friday 17 April 2026  06:25:48 +0000 (0:00:00.149)       0:30:51.243 **********
2026-04-17 06:26:06.692040 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:26:06.692047 | orchestrator |
2026-04-17 06:26:06.692053 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-17 06:26:06.692060 | orchestrator | Friday 17 April 2026  06:25:48 +0000 (0:00:00.153)       0:30:51.396 **********
2026-04-17 06:26:06.692066 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:26:06.692073 | orchestrator |
2026-04-17 06:26:06.692080 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-17 06:26:06.692086 | orchestrator | Friday 17 April 2026  06:25:48 +0000 (0:00:00.164)       0:30:51.560 **********
2026-04-17 06:26:06.692093 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:26:06.692099 | orchestrator |
2026-04-17 06:26:06.692106 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-17 06:26:06.692112 | orchestrator | Friday 17 April 2026  06:25:48 +0000 (0:00:00.146)       0:30:51.707 **********
2026-04-17 06:26:06.692119 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:26:06.692127 | orchestrator |
2026-04-17 06:26:06.692133 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-17 06:26:06.692140 | orchestrator | Friday 17 April 2026  06:25:49 +0000 (0:00:00.596)       0:30:52.304 **********
2026-04-17 06:26:06.692147 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-04-17 06:26:06.692155 | orchestrator |
2026-04-17 06:26:06.692161 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-17 06:26:06.692168 | orchestrator | Friday 17 April 2026  06:25:49 +0000 (0:00:00.194)       0:30:52.498 **********
2026-04-17 06:26:06.692175 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-17 06:26:06.692182 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-17 06:26:06.692188 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-17 06:26:06.692195 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-17 06:26:06.692201 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-17 06:26:06.692207 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-17 06:26:06.692214 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-17 06:26:06.692221 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-17 06:26:06.692242 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-17 06:26:06.692248 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-17 06:26:06.692255 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-17 06:26:06.692261 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-17 06:26:06.692268 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-17 06:26:06.692274 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-17 06:26:06.692281 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-17 06:26:06.692287 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-17 06:26:06.692294 | orchestrator |
2026-04-17 06:26:06.692300 | orchestrator |
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-17 06:26:06.692306 | orchestrator | Friday 17 April 2026 06:25:55 +0000 (0:00:05.311) 0:30:57.809 ********** 2026-04-17 06:26:06.692313 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-04-17 06:26:06.692320 | orchestrator | 2026-04-17 06:26:06.692326 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-17 06:26:06.692333 | orchestrator | Friday 17 April 2026 06:25:55 +0000 (0:00:00.235) 0:30:58.044 ********** 2026-04-17 06:26:06.692339 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-17 06:26:06.692347 | orchestrator | 2026-04-17 06:26:06.692354 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-17 06:26:06.692360 | orchestrator | Friday 17 April 2026 06:25:55 +0000 (0:00:00.505) 0:30:58.550 ********** 2026-04-17 06:26:06.692367 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-17 06:26:06.692373 | orchestrator | 2026-04-17 06:26:06.692380 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-17 06:26:06.692386 | orchestrator | Friday 17 April 2026 06:25:56 +0000 (0:00:01.038) 0:30:59.588 ********** 2026-04-17 06:26:06.692393 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692399 | orchestrator | 2026-04-17 06:26:06.692406 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-17 06:26:06.692425 | orchestrator | Friday 17 April 2026 06:25:56 +0000 (0:00:00.137) 0:30:59.725 ********** 2026-04-17 06:26:06.692432 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692439 | 
orchestrator | 2026-04-17 06:26:06.692445 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-17 06:26:06.692452 | orchestrator | Friday 17 April 2026 06:25:57 +0000 (0:00:00.134) 0:30:59.860 ********** 2026-04-17 06:26:06.692459 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692465 | orchestrator | 2026-04-17 06:26:06.692472 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-17 06:26:06.692478 | orchestrator | Friday 17 April 2026 06:25:57 +0000 (0:00:00.132) 0:30:59.992 ********** 2026-04-17 06:26:06.692485 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692491 | orchestrator | 2026-04-17 06:26:06.692498 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-17 06:26:06.692504 | orchestrator | Friday 17 April 2026 06:25:57 +0000 (0:00:00.130) 0:31:00.123 ********** 2026-04-17 06:26:06.692511 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692521 | orchestrator | 2026-04-17 06:26:06.692528 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-17 06:26:06.692534 | orchestrator | Friday 17 April 2026 06:25:57 +0000 (0:00:00.479) 0:31:00.602 ********** 2026-04-17 06:26:06.692541 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692547 | orchestrator | 2026-04-17 06:26:06.692554 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-17 06:26:06.692565 | orchestrator | Friday 17 April 2026 06:25:58 +0000 (0:00:00.163) 0:31:00.765 ********** 2026-04-17 06:26:06.692572 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692578 | orchestrator | 2026-04-17 06:26:06.692585 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-04-17 06:26:06.692592 | orchestrator | Friday 17 April 2026 06:25:58 +0000 (0:00:00.197) 0:31:00.963 ********** 2026-04-17 06:26:06.692598 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692605 | orchestrator | 2026-04-17 06:26:06.692611 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-17 06:26:06.692618 | orchestrator | Friday 17 April 2026 06:25:58 +0000 (0:00:00.159) 0:31:01.122 ********** 2026-04-17 06:26:06.692624 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692631 | orchestrator | 2026-04-17 06:26:06.692637 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-17 06:26:06.692644 | orchestrator | Friday 17 April 2026 06:25:58 +0000 (0:00:00.128) 0:31:01.251 ********** 2026-04-17 06:26:06.692650 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692657 | orchestrator | 2026-04-17 06:26:06.692663 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-17 06:26:06.692670 | orchestrator | Friday 17 April 2026 06:25:58 +0000 (0:00:00.144) 0:31:01.395 ********** 2026-04-17 06:26:06.692676 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692683 | orchestrator | 2026-04-17 06:26:06.692689 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-17 06:26:06.692696 | orchestrator | Friday 17 April 2026 06:25:58 +0000 (0:00:00.181) 0:31:01.576 ********** 2026-04-17 06:26:06.692702 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-04-17 06:26:06.692709 | orchestrator | 2026-04-17 06:26:06.692715 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-17 06:26:06.692722 | orchestrator | Friday 17 April 2026 06:26:02 +0000 (0:00:03.455) 0:31:05.032 ********** 2026-04-17 06:26:06.692728 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-17 06:26:06.692735 | orchestrator | 2026-04-17 06:26:06.692742 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-17 06:26:06.692748 | orchestrator | Friday 17 April 2026 06:26:02 +0000 (0:00:00.199) 0:31:05.231 ********** 2026-04-17 06:26:06.692757 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-04-17 06:26:06.692767 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-04-17 06:26:06.692774 | orchestrator | 2026-04-17 06:26:06.692781 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-17 06:26:06.692788 | orchestrator | Friday 17 April 2026 06:26:06 +0000 (0:00:03.761) 0:31:08.993 ********** 2026-04-17 06:26:06.692794 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692801 | orchestrator | 2026-04-17 06:26:06.692807 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-17 06:26:06.692814 | orchestrator | Friday 17 April 2026 06:26:06 +0000 (0:00:00.143) 0:31:09.137 ********** 2026-04-17 06:26:06.692820 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692827 | orchestrator | 2026-04-17 06:26:06.692833 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 06:26:06.692844 | orchestrator | Friday 17 April 2026 06:26:06 +0000 (0:00:00.126) 0:31:09.263 ********** 2026-04-17 06:26:06.692851 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:06.692857 | orchestrator | 2026-04-17 06:26:06.692864 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-17 06:26:06.692875 | orchestrator | Friday 17 April 2026 06:26:06 +0000 (0:00:00.163) 0:31:09.427 ********** 2026-04-17 06:26:55.488387 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:55.488516 | orchestrator | 2026-04-17 06:26:55.488544 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 06:26:55.488566 | orchestrator | Friday 17 April 2026 06:26:07 +0000 (0:00:00.527) 0:31:09.954 ********** 2026-04-17 06:26:55.488584 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:55.488595 | orchestrator | 2026-04-17 06:26:55.488607 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 06:26:55.488617 | orchestrator | Friday 17 April 2026 06:26:07 +0000 (0:00:00.169) 0:31:10.124 ********** 2026-04-17 06:26:55.488628 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:26:55.488639 | orchestrator | 2026-04-17 06:26:55.488650 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 06:26:55.488676 | orchestrator | Friday 17 April 2026 06:26:07 +0000 (0:00:00.264) 0:31:10.388 ********** 2026-04-17 06:26:55.488688 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:26:55.488699 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:26:55.488710 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-17 06:26:55.488720 | orchestrator | skipping: 
[testbed-node-5] 2026-04-17 06:26:55.488731 | orchestrator | 2026-04-17 06:26:55.488742 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 06:26:55.488753 | orchestrator | Friday 17 April 2026 06:26:08 +0000 (0:00:00.478) 0:31:10.866 ********** 2026-04-17 06:26:55.488763 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:26:55.488774 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:26:55.488785 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-17 06:26:55.488795 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:55.488806 | orchestrator | 2026-04-17 06:26:55.488816 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 06:26:55.488827 | orchestrator | Friday 17 April 2026 06:26:08 +0000 (0:00:00.433) 0:31:11.300 ********** 2026-04-17 06:26:55.488838 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-17 06:26:55.488848 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-17 06:26:55.488859 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-17 06:26:55.488870 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:55.488880 | orchestrator | 2026-04-17 06:26:55.488897 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 06:26:55.488914 | orchestrator | Friday 17 April 2026 06:26:08 +0000 (0:00:00.437) 0:31:11.737 ********** 2026-04-17 06:26:55.488934 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:26:55.488953 | orchestrator | 2026-04-17 06:26:55.488969 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 06:26:55.488980 | orchestrator | Friday 17 April 2026 06:26:09 +0000 (0:00:00.185) 0:31:11.923 ********** 2026-04-17 06:26:55.489021 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-04-17 06:26:55.489040 | orchestrator | 2026-04-17 06:26:55.489058 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-17 06:26:55.489077 | orchestrator | Friday 17 April 2026 06:26:09 +0000 (0:00:00.435) 0:31:12.358 ********** 2026-04-17 06:26:55.489095 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:26:55.489114 | orchestrator | 2026-04-17 06:26:55.489132 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-17 06:26:55.489151 | orchestrator | Friday 17 April 2026 06:26:10 +0000 (0:00:00.831) 0:31:13.189 ********** 2026-04-17 06:26:55.489199 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-04-17 06:26:55.489212 | orchestrator | 2026-04-17 06:26:55.489223 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-17 06:26:55.489233 | orchestrator | Friday 17 April 2026 06:26:10 +0000 (0:00:00.274) 0:31:13.463 ********** 2026-04-17 06:26:55.489244 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 06:26:55.489254 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-17 06:26:55.489265 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 06:26:55.489276 | orchestrator | 2026-04-17 06:26:55.489287 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-17 06:26:55.489297 | orchestrator | Friday 17 April 2026 06:26:13 +0000 (0:00:03.019) 0:31:16.483 ********** 2026-04-17 06:26:55.489308 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-17 06:26:55.489319 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-17 06:26:55.489329 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:26:55.489340 | orchestrator | 2026-04-17 06:26:55.489350 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-17 06:26:55.489361 | orchestrator | Friday 17 April 2026 06:26:14 +0000 (0:00:00.979) 0:31:17.462 ********** 2026-04-17 06:26:55.489372 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:26:55.489382 | orchestrator | 2026-04-17 06:26:55.489393 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-17 06:26:55.489403 | orchestrator | Friday 17 April 2026 06:26:14 +0000 (0:00:00.141) 0:31:17.604 ********** 2026-04-17 06:26:55.489415 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-04-17 06:26:55.489427 | orchestrator | 2026-04-17 06:26:55.489437 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-17 06:26:55.489448 | orchestrator | Friday 17 April 2026 06:26:15 +0000 (0:00:00.223) 0:31:17.828 ********** 2026-04-17 06:26:55.489460 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-17 06:26:55.489472 | orchestrator | 2026-04-17 06:26:55.489483 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-17 06:26:55.489493 | orchestrator | Friday 17 April 2026 06:26:15 +0000 (0:00:00.660) 0:31:18.488 ********** 2026-04-17 06:26:55.489523 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 06:26:55.489536 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-17 06:26:55.489547 | orchestrator | 2026-04-17 06:26:55.489558 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-17 06:26:55.489568 | orchestrator | Friday 17 April 2026 06:26:19 +0000 (0:00:04.048) 0:31:22.536 ********** 
2026-04-17 06:26:55.489579 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 06:26:55.489590 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-17 06:26:55.489600 | orchestrator |
2026-04-17 06:26:55.489611 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-17 06:26:55.489628 | orchestrator | Friday 17 April 2026 06:26:21 +0000 (0:00:01.990) 0:31:24.527 **********
2026-04-17 06:26:55.489639 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-17 06:26:55.489650 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:26:55.489661 | orchestrator |
2026-04-17 06:26:55.489672 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-04-17 06:26:55.489682 | orchestrator | Friday 17 April 2026 06:26:22 +0000 (0:00:01.049) 0:31:25.577 **********
2026-04-17 06:26:55.489693 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5
2026-04-17 06:26:55.489703 | orchestrator |
2026-04-17 06:26:55.489714 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-04-17 06:26:55.489733 | orchestrator | Friday 17 April 2026 06:26:23 +0000 (0:00:00.237) 0:31:25.814 **********
2026-04-17 06:26:55.489743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489798 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:26:55.489808 | orchestrator |
2026-04-17 06:26:55.489819 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-17 06:26:55.489829 | orchestrator | Friday 17 April 2026 06:26:24 +0000 (0:00:01.043) 0:31:26.858 **********
2026-04-17 06:26:55.489840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489893 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:26:55.489903 | orchestrator |
2026-04-17 06:26:55.489914 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-17 06:26:55.489925 | orchestrator | Friday 17 April 2026 06:26:25 +0000 (0:00:00.977) 0:31:27.835 **********
2026-04-17 06:26:55.489936 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489947 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489957 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489968 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489979 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-17 06:26:55.489990 | orchestrator |
2026-04-17 06:26:55.490125 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-17 06:26:55.490138 | orchestrator | Friday 17 April 2026 06:26:55 +0000 (0:00:30.247) 0:31:58.083 **********
2026-04-17 06:26:55.490148 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:26:55.490159 | orchestrator |
2026-04-17 06:26:55.490170 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-17 06:26:55.490191 | orchestrator | Friday 17 April 2026 06:26:55 +0000 (0:00:00.137) 0:31:58.221 **********
2026-04-17 06:27:25.694841 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:27:25.694978 | orchestrator |
2026-04-17 06:27:25.694995 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-17 06:27:25.695007 | orchestrator | Friday 17 April 2026 06:26:55 +0000 (0:00:00.153) 0:31:58.374 **********
2026-04-17 06:27:25.695066 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5
2026-04-17 06:27:25.695078 | orchestrator |
2026-04-17 06:27:25.695090 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-17 06:27:25.695101 | orchestrator | Friday 17 April 2026 06:26:55 +0000 (0:00:00.230) 0:31:58.604 **********
2026-04-17 06:27:25.695112 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5
2026-04-17 06:27:25.695123 | orchestrator |
2026-04-17 06:27:25.695147 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-17 06:27:25.695158 | orchestrator | Friday 17 April 2026 06:26:56 +0000 (0:00:00.209) 0:31:58.814 **********
2026-04-17 06:27:25.695169 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:27:25.695180 | orchestrator |
2026-04-17 06:27:25.695191 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-17 06:27:25.695202 | orchestrator | Friday 17 April 2026 06:26:57 +0000 (0:00:01.072) 0:31:59.887 **********
2026-04-17 06:27:25.695212 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:27:25.695223 | orchestrator |
2026-04-17 06:27:25.695234 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-17 06:27:25.695244 | orchestrator | Friday 17 April 2026 06:26:58 +0000 (0:00:01.048) 0:32:00.935 **********
2026-04-17 06:27:25.695255 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:27:25.695265 | orchestrator |
2026-04-17 06:27:25.695276 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-17 06:27:25.695287 | orchestrator | Friday 17 April 2026 06:26:59 +0000 (0:00:01.304) 0:32:02.240 **********
2026-04-17 06:27:25.695298 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 06:27:25.695311 | orchestrator |
2026-04-17 06:27:25.695321 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ********************************************
2026-04-17 06:27:25.695333 | orchestrator | skipping: no hosts matched
2026-04-17 06:27:25.695345 | orchestrator |
2026-04-17 06:27:25.695356 | orchestrator | PLAY [Upgrade ceph nfs node] ***************************************************
2026-04-17 06:27:25.695366 | orchestrator | skipping: no hosts matched
2026-04-17 06:27:25.695377 | orchestrator |
2026-04-17 06:27:25.695390 | orchestrator | PLAY [Upgrade ceph client node] ************************************************
2026-04-17 06:27:25.695403 | orchestrator | skipping: no hosts matched
2026-04-17 06:27:25.695415 | orchestrator |
2026-04-17 06:27:25.695427 | orchestrator | PLAY [Upgrade ceph-crash daemons] **********************************************
2026-04-17 06:27:25.695439 | orchestrator |
2026-04-17 06:27:25.695451 | orchestrator | TASK [Stop the ceph-crash service] *********************************************
2026-04-17 06:27:25.695464 | orchestrator | Friday 17 April 2026 06:27:03 +0000 (0:00:03.537) 0:32:05.777 **********
2026-04-17 06:27:25.695476 | orchestrator | changed: [testbed-node-0]
2026-04-17 06:27:25.695488 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:27:25.695500 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:27:25.695512 | orchestrator | changed: [testbed-node-3]
2026-04-17 06:27:25.695524 | orchestrator | changed: [testbed-node-4]
2026-04-17 06:27:25.695537 | orchestrator | changed: [testbed-node-5]
2026-04-17 06:27:25.695549 | orchestrator |
2026-04-17 06:27:25.695562 | orchestrator | TASK [Mask and disable the ceph-crash service] *********************************
2026-04-17 06:27:25.695574 | orchestrator | Friday 17 April 2026 06:27:04 +0000 (0:00:01.907) 0:32:07.685 **********
2026-04-17 06:27:25.695586 | orchestrator | changed: [testbed-node-0]
2026-04-17 06:27:25.695598 | orchestrator | changed: [testbed-node-3]
2026-04-17 06:27:25.695610 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:27:25.695622 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:27:25.695634 | orchestrator | changed: [testbed-node-4]
2026-04-17 06:27:25.695646 | orchestrator | changed: [testbed-node-5]
2026-04-17 06:27:25.695667 | orchestrator |
2026-04-17 06:27:25.695680 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-17 06:27:25.695693 | orchestrator | Friday 17 April 2026 06:27:07 +0000 (0:00:02.627) 0:32:10.312 **********
2026-04-17 06:27:25.695706 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:27:25.695718 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:27:25.695731 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:27:25.695742 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:27:25.695752 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:27:25.695763 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:27:25.695773 | orchestrator |
2026-04-17 06:27:25.695784 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-17 06:27:25.695795 | orchestrator | Friday 17 April 2026 06:27:08 +0000 (0:00:01.046) 0:32:11.359 **********
2026-04-17 06:27:25.695806 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:27:25.695816 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:27:25.695827 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:27:25.695837 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:27:25.695848 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:27:25.695858 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:27:25.695869 | orchestrator |
2026-04-17 06:27:25.695880 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 06:27:25.695891 | orchestrator | Friday 17 April 2026 06:27:10 +0000 (0:00:01.550) 0:32:12.909 **********
2026-04-17 06:27:25.695903 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 06:27:25.695916 | orchestrator |
2026-04-17 06:27:25.695926 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 06:27:25.695937 | orchestrator | Friday 17 April 2026 06:27:11 +0000 (0:00:01.691) 0:32:14.600 **********
2026-04-17 06:27:25.695949 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 06:27:25.695960 | orchestrator |
2026-04-17 06:27:25.695987 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 06:27:25.695998 | orchestrator | Friday 17 April 2026 06:27:13 +0000 (0:00:01.651) 0:32:16.251 **********
2026-04-17 06:27:25.696009 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:27:25.696040 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:27:25.696051 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:27:25.696062 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:27:25.696073 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:27:25.696084 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:27:25.696095 | orchestrator |
2026-04-17 06:27:25.696105 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 06:27:25.696116 | orchestrator | Friday 17 April 2026 06:27:14 +0000 (0:00:00.860) 0:32:17.112 **********
2026-04-17 06:27:25.696127 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:27:25.696137 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:27:25.696153 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:27:25.696164 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:27:25.696175 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:27:25.696185 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:27:25.696196 | orchestrator |
2026-04-17 06:27:25.696207 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 06:27:25.696218 | orchestrator | Friday 17 April 2026 06:27:16 +0000 (0:00:01.660) 0:32:18.773 **********
2026-04-17 06:27:25.696228 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:27:25.696239 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:27:25.696250 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:27:25.696261 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:27:25.696271 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:27:25.696282 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:27:25.696300 | orchestrator |
2026-04-17 06:27:25.696311 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 06:27:25.696322 | orchestrator | Friday 17 April 2026 06:27:17 +0000 (0:00:01.118) 0:32:19.891 **********
2026-04-17 06:27:25.696332 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:27:25.696343 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:27:25.696354 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:27:25.696365 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:27:25.696376 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:27:25.696386 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:27:25.696397 | orchestrator |
2026-04-17 06:27:25.696408 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 06:27:25.696419 | orchestrator | Friday 17 April 2026 06:27:18 +0000 (0:00:01.488) 0:32:21.380 **********
2026-04-17 06:27:25.696429 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:27:25.696440 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:27:25.696451 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:27:25.696461 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:27:25.696472 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:27:25.696483 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:27:25.696494 | orchestrator |
2026-04-17 06:27:25.696505 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 06:27:25.696515 | orchestrator | Friday 17 April 2026 06:27:19 +0000 (0:00:00.803) 0:32:22.183 ********** 2026-04-17 06:27:25.696526 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:25.696537 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:25.696548 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:25.696558 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:27:25.696569 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:27:25.696579 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:27:25.696590 | orchestrator | 2026-04-17 06:27:25.696601 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 06:27:25.696612 | orchestrator | Friday 17 April 2026 06:27:20 +0000 (0:00:01.053) 0:32:23.237 ********** 2026-04-17 06:27:25.696623 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:25.696634 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:25.696644 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:25.696655 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:27:25.696665 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:27:25.696676 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:27:25.696687 | orchestrator | 2026-04-17 06:27:25.696698 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 06:27:25.696708 | orchestrator | Friday 17 April 2026 06:27:21 +0000 (0:00:00.763) 0:32:24.000 ********** 2026-04-17 06:27:25.696719 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:25.696730 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:27:25.696740 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:27:25.696751 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:27:25.696762 | orchestrator | ok: [testbed-node-4] 
2026-04-17 06:27:25.696772 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:27:25.696783 | orchestrator | 2026-04-17 06:27:25.696794 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 06:27:25.696805 | orchestrator | Friday 17 April 2026 06:27:22 +0000 (0:00:01.486) 0:32:25.487 ********** 2026-04-17 06:27:25.696815 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:25.696826 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:27:25.696837 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:27:25.696847 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:27:25.696858 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:27:25.696868 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:27:25.696879 | orchestrator | 2026-04-17 06:27:25.696890 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 06:27:25.696901 | orchestrator | Friday 17 April 2026 06:27:23 +0000 (0:00:01.107) 0:32:26.594 ********** 2026-04-17 06:27:25.696911 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:25.696928 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:25.696939 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:25.696950 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:27:25.696960 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:27:25.696971 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:27:25.696982 | orchestrator | 2026-04-17 06:27:25.696992 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 06:27:25.697003 | orchestrator | Friday 17 April 2026 06:27:24 +0000 (0:00:00.675) 0:32:27.269 ********** 2026-04-17 06:27:25.697043 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:25.697055 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:27:25.697066 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:27:25.697077 | orchestrator | skipping: 
[testbed-node-3] 2026-04-17 06:27:25.697087 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:27:25.697098 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:27:25.697109 | orchestrator | 2026-04-17 06:27:25.697126 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 06:27:59.655640 | orchestrator | Friday 17 April 2026 06:27:25 +0000 (0:00:01.159) 0:32:28.428 ********** 2026-04-17 06:27:59.655768 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:59.655785 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:59.655797 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:59.655809 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:27:59.655820 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:27:59.655831 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:27:59.655842 | orchestrator | 2026-04-17 06:27:59.655854 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 06:27:59.655865 | orchestrator | Friday 17 April 2026 06:27:26 +0000 (0:00:00.684) 0:32:29.113 ********** 2026-04-17 06:27:59.655876 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:59.655902 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:59.655914 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:59.655925 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:27:59.655936 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:27:59.655947 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:27:59.655957 | orchestrator | 2026-04-17 06:27:59.655968 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 06:27:59.655979 | orchestrator | Friday 17 April 2026 06:27:27 +0000 (0:00:01.039) 0:32:30.153 ********** 2026-04-17 06:27:59.655990 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:59.656001 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
06:27:59.656011 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:59.656022 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:27:59.656203 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:27:59.656222 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:27:59.656235 | orchestrator | 2026-04-17 06:27:59.656248 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 06:27:59.656261 | orchestrator | Friday 17 April 2026 06:27:28 +0000 (0:00:00.804) 0:32:30.958 ********** 2026-04-17 06:27:59.656274 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:59.656287 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:59.656298 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:59.656309 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:27:59.656320 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:27:59.656330 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:27:59.656341 | orchestrator | 2026-04-17 06:27:59.656352 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 06:27:59.656363 | orchestrator | Friday 17 April 2026 06:27:29 +0000 (0:00:01.090) 0:32:32.048 ********** 2026-04-17 06:27:59.656373 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:59.656384 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:59.656395 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:59.656406 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:27:59.656417 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:27:59.656452 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:27:59.656464 | orchestrator | 2026-04-17 06:27:59.656475 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 06:27:59.656485 | orchestrator | Friday 17 April 2026 06:27:30 +0000 (0:00:00.706) 0:32:32.754 ********** 2026-04-17 06:27:59.656496 | 
orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:59.656507 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:27:59.656518 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:27:59.656528 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:27:59.656539 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:27:59.656549 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:27:59.656560 | orchestrator | 2026-04-17 06:27:59.656571 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 06:27:59.656582 | orchestrator | Friday 17 April 2026 06:27:30 +0000 (0:00:00.984) 0:32:33.739 ********** 2026-04-17 06:27:59.656592 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:59.656603 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:27:59.656614 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:27:59.656624 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:27:59.656635 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:27:59.656645 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:27:59.656656 | orchestrator | 2026-04-17 06:27:59.656667 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 06:27:59.656677 | orchestrator | Friday 17 April 2026 06:27:31 +0000 (0:00:00.694) 0:32:34.433 ********** 2026-04-17 06:27:59.656688 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:59.656698 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:27:59.656710 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:27:59.656720 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:27:59.656731 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:27:59.656741 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:27:59.656752 | orchestrator | 2026-04-17 06:27:59.656763 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-17 06:27:59.656773 | orchestrator | Friday 17 April 2026 06:27:33 +0000 (0:00:01.519) 
0:32:35.953 ********** 2026-04-17 06:27:59.656784 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:59.656795 | orchestrator | 2026-04-17 06:27:59.656805 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-17 06:27:59.656816 | orchestrator | Friday 17 April 2026 06:27:35 +0000 (0:00:02.196) 0:32:38.149 ********** 2026-04-17 06:27:59.656826 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:59.656837 | orchestrator | 2026-04-17 06:27:59.656848 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-17 06:27:59.656858 | orchestrator | Friday 17 April 2026 06:27:37 +0000 (0:00:02.094) 0:32:40.243 ********** 2026-04-17 06:27:59.656869 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:59.656880 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:27:59.656890 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:27:59.656901 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:27:59.656911 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:27:59.656922 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:27:59.656932 | orchestrator | 2026-04-17 06:27:59.656943 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-17 06:27:59.656954 | orchestrator | Friday 17 April 2026 06:27:39 +0000 (0:00:01.838) 0:32:42.081 ********** 2026-04-17 06:27:59.656965 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:59.656975 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:27:59.656985 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:27:59.656996 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:27:59.657006 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:27:59.657017 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:27:59.657027 | orchestrator | 2026-04-17 06:27:59.657058 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-04-17 06:27:59.657089 | 
orchestrator | Friday 17 April 2026 06:27:40 +0000 (0:00:01.113) 0:32:43.195 ********** 2026-04-17 06:27:59.657102 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 06:27:59.657123 | orchestrator | 2026-04-17 06:27:59.657134 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-17 06:27:59.657145 | orchestrator | Friday 17 April 2026 06:27:42 +0000 (0:00:02.040) 0:32:45.236 ********** 2026-04-17 06:27:59.657156 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:59.657166 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:27:59.657177 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:27:59.657195 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:27:59.657206 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:27:59.657217 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:27:59.657227 | orchestrator | 2026-04-17 06:27:59.657238 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-17 06:27:59.657249 | orchestrator | Friday 17 April 2026 06:27:44 +0000 (0:00:01.596) 0:32:46.833 ********** 2026-04-17 06:27:59.657260 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:27:59.657270 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:27:59.657281 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:27:59.657292 | orchestrator | changed: [testbed-node-3] 2026-04-17 06:27:59.657302 | orchestrator | changed: [testbed-node-4] 2026-04-17 06:27:59.657313 | orchestrator | changed: [testbed-node-5] 2026-04-17 06:27:59.657323 | orchestrator | 2026-04-17 06:27:59.657334 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-04-17 06:27:59.657345 | orchestrator | 2026-04-17 06:27:59.657355 | orchestrator | TASK [ceph-facts : Check if podman binary is present] 
************************** 2026-04-17 06:27:59.657366 | orchestrator | Friday 17 April 2026 06:27:47 +0000 (0:00:03.873) 0:32:50.707 ********** 2026-04-17 06:27:59.657377 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:59.657387 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:27:59.657398 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:27:59.657408 | orchestrator | 2026-04-17 06:27:59.657419 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:27:59.657429 | orchestrator | Friday 17 April 2026 06:27:49 +0000 (0:00:01.239) 0:32:51.946 ********** 2026-04-17 06:27:59.657440 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:59.657450 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:27:59.657461 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:27:59.657472 | orchestrator | 2026-04-17 06:27:59.657483 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-04-17 06:27:59.657494 | orchestrator | Friday 17 April 2026 06:27:49 +0000 (0:00:00.591) 0:32:52.538 ********** 2026-04-17 06:27:59.657504 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:27:59.657515 | orchestrator | 2026-04-17 06:27:59.657526 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-04-17 06:27:59.657537 | orchestrator | Friday 17 April 2026 06:27:51 +0000 (0:00:01.333) 0:32:53.872 ********** 2026-04-17 06:27:59.657547 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:59.657558 | orchestrator | 2026-04-17 06:27:59.657568 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-04-17 06:27:59.657579 | orchestrator | 2026-04-17 06:27:59.657590 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-04-17 06:27:59.657600 | orchestrator | Friday 17 April 2026 06:27:52 +0000 (0:00:01.580) 0:32:55.452 
********** 2026-04-17 06:27:59.657611 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:59.657621 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:59.657632 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:59.657642 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:27:59.657653 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:27:59.657664 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:27:59.657674 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:27:59.657685 | orchestrator | 2026-04-17 06:27:59.657696 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:27:59.657706 | orchestrator | Friday 17 April 2026 06:27:53 +0000 (0:00:00.765) 0:32:56.218 ********** 2026-04-17 06:27:59.657724 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:59.657734 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:59.657745 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:59.657755 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:27:59.657766 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:27:59.657776 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:27:59.657787 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:27:59.657797 | orchestrator | 2026-04-17 06:27:59.657808 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-04-17 06:27:59.657819 | orchestrator | Friday 17 April 2026 06:27:55 +0000 (0:00:02.144) 0:32:58.362 ********** 2026-04-17 06:27:59.657829 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:59.657840 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:59.657850 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:59.657861 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:27:59.657871 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:27:59.657881 | orchestrator | skipping: 
[testbed-node-5] 2026-04-17 06:27:59.657892 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:27:59.657902 | orchestrator | 2026-04-17 06:27:59.657913 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-04-17 06:27:59.657924 | orchestrator | Friday 17 April 2026 06:27:57 +0000 (0:00:01.790) 0:33:00.152 ********** 2026-04-17 06:27:59.657934 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:59.657945 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:59.657955 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:59.657965 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:27:59.657976 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:27:59.657986 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:27:59.657997 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:27:59.658007 | orchestrator | 2026-04-17 06:27:59.658099 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-04-17 06:27:59.658113 | orchestrator | Friday 17 April 2026 06:27:59 +0000 (0:00:01.755) 0:33:01.908 ********** 2026-04-17 06:27:59.658123 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:27:59.658134 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:27:59.658145 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:27:59.658163 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:28:19.463155 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:28:19.463264 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:28:19.463278 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463289 | orchestrator | 2026-04-17 06:28:19.463300 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-04-17 06:28:19.463311 | orchestrator | 2026-04-17 06:28:19.463321 | orchestrator | TASK [Stop monitoring services] ************************************************ 
2026-04-17 06:28:19.463331 | orchestrator | Friday 17 April 2026 06:28:01 +0000 (0:00:02.328) 0:33:04.236 ********** 2026-04-17 06:28:19.463341 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-04-17 06:28:19.463351 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-04-17 06:28:19.463378 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-04-17 06:28:19.463389 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463398 | orchestrator | 2026-04-17 06:28:19.463408 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-04-17 06:28:19.463418 | orchestrator | Friday 17 April 2026 06:28:01 +0000 (0:00:00.191) 0:33:04.427 ********** 2026-04-17 06:28:19.463427 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463437 | orchestrator | 2026-04-17 06:28:19.463446 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-04-17 06:28:19.463456 | orchestrator | Friday 17 April 2026 06:28:01 +0000 (0:00:00.184) 0:33:04.612 ********** 2026-04-17 06:28:19.463466 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463497 | orchestrator | 2026-04-17 06:28:19.463507 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-04-17 06:28:19.463517 | orchestrator | Friday 17 April 2026 06:28:02 +0000 (0:00:00.191) 0:33:04.804 ********** 2026-04-17 06:28:19.463526 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463536 | orchestrator | 2026-04-17 06:28:19.463545 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-04-17 06:28:19.463555 | orchestrator | Friday 17 April 2026 06:28:02 +0000 (0:00:00.168) 0:33:04.972 ********** 2026-04-17 06:28:19.463564 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463573 | orchestrator | 2026-04-17 06:28:19.463583 | orchestrator | 
TASK [ceph-prometheus : Create prometheus directories] ************************* 2026-04-17 06:28:19.463592 | orchestrator | Friday 17 April 2026 06:28:02 +0000 (0:00:00.632) 0:33:05.604 ********** 2026-04-17 06:28:19.463602 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-04-17 06:28:19.463612 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-04-17 06:28:19.463621 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463633 | orchestrator | 2026-04-17 06:28:19.463644 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-04-17 06:28:19.463655 | orchestrator | Friday 17 April 2026 06:28:03 +0000 (0:00:00.185) 0:33:05.790 ********** 2026-04-17 06:28:19.463667 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463677 | orchestrator | 2026-04-17 06:28:19.463688 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-04-17 06:28:19.463699 | orchestrator | Friday 17 April 2026 06:28:03 +0000 (0:00:00.197) 0:33:05.987 ********** 2026-04-17 06:28:19.463710 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463721 | orchestrator | 2026-04-17 06:28:19.463731 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-04-17 06:28:19.463742 | orchestrator | Friday 17 April 2026 06:28:03 +0000 (0:00:00.176) 0:33:06.164 ********** 2026-04-17 06:28:19.463753 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463764 | orchestrator | 2026-04-17 06:28:19.463775 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-04-17 06:28:19.463786 | orchestrator | Friday 17 April 2026 06:28:03 +0000 (0:00:00.159) 0:33:06.324 ********** 2026-04-17 06:28:19.463798 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-04-17 06:28:19.463809 | orchestrator | skipping: 
[testbed-manager] => (item=/var/lib/alertmanager)  2026-04-17 06:28:19.463819 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463830 | orchestrator | 2026-04-17 06:28:19.463841 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-04-17 06:28:19.463852 | orchestrator | Friday 17 April 2026 06:28:03 +0000 (0:00:00.181) 0:33:06.505 ********** 2026-04-17 06:28:19.463863 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463874 | orchestrator | 2026-04-17 06:28:19.463884 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-04-17 06:28:19.463896 | orchestrator | Friday 17 April 2026 06:28:03 +0000 (0:00:00.177) 0:33:06.682 ********** 2026-04-17 06:28:19.463906 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463917 | orchestrator | 2026-04-17 06:28:19.463928 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-04-17 06:28:19.463938 | orchestrator | Friday 17 April 2026 06:28:04 +0000 (0:00:00.267) 0:33:06.950 ********** 2026-04-17 06:28:19.463949 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.463960 | orchestrator | 2026-04-17 06:28:19.463971 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-04-17 06:28:19.463982 | orchestrator | Friday 17 April 2026 06:28:04 +0000 (0:00:00.667) 0:33:07.617 ********** 2026-04-17 06:28:19.463992 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:19.464001 | orchestrator | 2026-04-17 06:28:19.464010 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-04-17 06:28:19.464020 | orchestrator | 2026-04-17 06:28:19.464029 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 06:28:19.464073 | orchestrator | Friday 17 April 2026 06:28:05 +0000 (0:00:00.900) 0:33:08.517 
********** 2026-04-17 06:28:19.464084 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:28:19.464094 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:28:19.464103 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:28:19.464113 | orchestrator | 2026-04-17 06:28:19.464123 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-04-17 06:28:19.464132 | orchestrator | Friday 17 April 2026 06:28:06 +0000 (0:00:00.573) 0:33:09.090 ********** 2026-04-17 06:28:19.464142 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:28:19.464152 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:28:19.464178 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:28:19.464188 | orchestrator | 2026-04-17 06:28:19.464198 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-04-17 06:28:19.464207 | orchestrator | Friday 17 April 2026 06:28:07 +0000 (0:00:00.723) 0:33:09.814 ********** 2026-04-17 06:28:19.464217 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:28:19.464226 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:28:19.464235 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:28:19.464245 | orchestrator | 2026-04-17 06:28:19.464254 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-04-17 06:28:19.464263 | orchestrator | Friday 17 April 2026 06:28:07 +0000 (0:00:00.327) 0:33:10.142 ********** 2026-04-17 06:28:19.464273 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:28:19.464288 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:28:19.464298 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:28:19.464307 | orchestrator | 2026-04-17 06:28:19.464316 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-04-17 06:28:19.464326 | orchestrator | Friday 17 April 2026 06:28:07 +0000 (0:00:00.329) 0:33:10.472 
********** 2026-04-17 06:28:19.464336 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:28:19.464345 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:28:19.464354 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:28:19.464364 | orchestrator | 2026-04-17 06:28:19.464373 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-04-17 06:28:19.464383 | orchestrator | Friday 17 April 2026 06:28:08 +0000 (0:00:00.939) 0:33:11.411 ********** 2026-04-17 06:28:19.464392 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:28:19.464402 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:28:19.464411 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:28:19.464421 | orchestrator | 2026-04-17 06:28:19.464430 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-04-17 06:28:19.464440 | orchestrator | Friday 17 April 2026 06:28:09 +0000 (0:00:00.340) 0:33:11.751 ********** 2026-04-17 06:28:19.464449 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:28:19.464458 | orchestrator | 2026-04-17 06:28:19.464468 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-04-17 06:28:19.464477 | orchestrator | 2026-04-17 06:28:19.464487 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 06:28:19.464496 | orchestrator | Friday 17 April 2026 06:28:09 +0000 (0:00:00.782) 0:33:12.534 ********** 2026-04-17 06:28:19.464506 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:28:19.464515 | orchestrator | 2026-04-17 06:28:19.464525 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 06:28:19.464534 | orchestrator | Friday 17 April 2026 06:28:10 +0000 (0:00:00.465) 0:33:13.000 ********** 2026-04-17 06:28:19.464544 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:28:19.464553 | orchestrator | 
2026-04-17 06:28:19.464563 | orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-04-17 06:28:19.464572 | orchestrator | Friday 17 April 2026 06:28:10 +0000 (0:00:00.255) 0:33:13.255 ********** 2026-04-17 06:28:19.464582 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:28:19.464591 | orchestrator | 2026-04-17 06:28:19.464601 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-04-17 06:28:19.464617 | orchestrator | Friday 17 April 2026 06:28:10 +0000 (0:00:00.183) 0:33:13.438 ********** 2026-04-17 06:28:19.464626 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:28:19.464636 | orchestrator | 2026-04-17 06:28:19.464645 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-04-17 06:28:19.464654 | orchestrator | Friday 17 April 2026 06:28:13 +0000 (0:00:02.323) 0:33:15.762 ********** 2026-04-17 06:28:19.464664 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:28:19.464674 | orchestrator | 2026-04-17 06:28:19.464683 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-04-17 06:28:19.464692 | orchestrator | Friday 17 April 2026 06:28:15 +0000 (0:00:02.225) 0:33:17.988 ********** 2026-04-17 06:28:19.464702 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:28:19.464711 | orchestrator | 2026-04-17 06:28:19.464721 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-04-17 06:28:19.464730 | orchestrator | 2026-04-17 06:28:19.464739 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-04-17 06:28:19.464749 | orchestrator | Friday 17 April 2026 06:28:16 +0000 (0:00:01.152) 0:33:19.140 ********** 2026-04-17 06:28:19.464758 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:28:19.464768 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:28:19.464777 | orchestrator | ok: 
[testbed-node-2] 2026-04-17 06:28:19.464787 | orchestrator | 2026-04-17 06:28:19.464796 | orchestrator | TASK [Show ceph status] ******************************************************** 2026-04-17 06:28:19.464806 | orchestrator | Friday 17 April 2026 06:28:16 +0000 (0:00:00.528) 0:33:19.669 ********** 2026-04-17 06:28:19.464815 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:28:19.464825 | orchestrator | 2026-04-17 06:28:19.464834 | orchestrator | TASK [Show all daemons version] ************************************************ 2026-04-17 06:28:19.464843 | orchestrator | Friday 17 April 2026 06:28:18 +0000 (0:00:01.305) 0:33:20.975 ********** 2026-04-17 06:28:19.464853 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:28:19.464862 | orchestrator | 2026-04-17 06:28:19.464871 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 06:28:19.464882 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 06:28:19.464893 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0 2026-04-17 06:28:19.464904 | orchestrator | testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 skipped=376  rescued=0 ignored=0 2026-04-17 06:28:19.464913 | orchestrator | testbed-node-1 : ok=191  changed=15  unreachable=0 failed=0 skipped=350  rescued=0 ignored=0 2026-04-17 06:28:19.464929 | orchestrator | testbed-node-2 : ok=196  changed=16  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0 2026-04-17 06:28:21.506406 | orchestrator | testbed-node-3 : ok=311  changed=22  unreachable=0 failed=0 skipped=348  rescued=0 ignored=0 2026-04-17 06:28:21.506489 | orchestrator | testbed-node-4 : ok=308  changed=17  unreachable=0 failed=0 skipped=359  rescued=0 ignored=0 2026-04-17 06:28:21.506513 | orchestrator | testbed-node-5 : ok=308  changed=18  unreachable=0 failed=0 skipped=358  rescued=0 ignored=0 2026-04-17 06:28:21.506521 | 
orchestrator | 2026-04-17 06:28:21.506528 | orchestrator | 2026-04-17 06:28:21.506534 | orchestrator | 2026-04-17 06:28:21.506540 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 06:28:21.506548 | orchestrator | Friday 17 April 2026 06:28:20 +0000 (0:00:02.396) 0:33:23.371 ********** 2026-04-17 06:28:21.506554 | orchestrator | =============================================================================== 2026-04-17 06:28:21.506578 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 71.61s 2026-04-17 06:28:21.506585 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 69.74s 2026-04-17 06:28:21.506591 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 34.75s 2026-04-17 06:28:21.506597 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.10s 2026-04-17 06:28:21.506603 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.25s 2026-04-17 06:28:21.506609 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.19s 2026-04-17 06:28:21.506615 | orchestrator | Gather and delegate facts ---------------------------------------------- 30.14s 2026-04-17 06:28:21.506621 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 26.92s 2026-04-17 06:28:21.506627 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 25.73s 2026-04-17 06:28:21.506633 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.98s 2026-04-17 06:28:21.506639 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 21.10s 2026-04-17 06:28:21.506645 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 14.43s 2026-04-17 06:28:21.506651 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.79s 2026-04-17 06:28:21.506657 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 11.52s 2026-04-17 06:28:21.506663 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 11.13s 2026-04-17 06:28:21.506669 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 11.12s 2026-04-17 06:28:21.506675 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 10.27s 2026-04-17 06:28:21.506681 | orchestrator | Stop ceph osd ----------------------------------------------------------- 9.82s 2026-04-17 06:28:21.506687 | orchestrator | Set cluster configs ----------------------------------------------------- 9.61s 2026-04-17 06:28:21.506693 | orchestrator | Restart active mds ------------------------------------------------------ 8.69s 2026-04-17 06:28:21.713618 | orchestrator | + osism apply cephclient 2026-04-17 06:28:23.123364 | orchestrator | 2026-04-17 06:28:23 | INFO  | Prepare task for execution of cephclient. 2026-04-17 06:28:23.198122 | orchestrator | 2026-04-17 06:28:23 | INFO  | Task 20d66003-2f9b-4d78-8761-cad61e0dbb51 (cephclient) was prepared for execution. 2026-04-17 06:28:23.198222 | orchestrator | 2026-04-17 06:28:23 | INFO  | It takes a moment until task 20d66003-2f9b-4d78-8761-cad61e0dbb51 (cephclient) has been started and output is visible here. 
2026-04-17 06:28:52.366862 | orchestrator | 2026-04-17 06:28:52.366996 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-17 06:28:52.367015 | orchestrator | 2026-04-17 06:28:52.367089 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-17 06:28:52.367104 | orchestrator | Friday 17 April 2026 06:28:29 +0000 (0:00:01.897) 0:00:01.897 ********** 2026-04-17 06:28:52.367117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-17 06:28:52.367130 | orchestrator | 2026-04-17 06:28:52.367141 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-17 06:28:52.367152 | orchestrator | Friday 17 April 2026 06:28:31 +0000 (0:00:02.187) 0:00:04.084 ********** 2026-04-17 06:28:52.367163 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-17 06:28:52.367175 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-17 06:28:52.367187 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-17 06:28:52.367198 | orchestrator | 2026-04-17 06:28:52.367208 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-17 06:28:52.367219 | orchestrator | Friday 17 April 2026 06:28:34 +0000 (0:00:02.775) 0:00:06.859 ********** 2026-04-17 06:28:52.367257 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-17 06:28:52.367269 | orchestrator | 2026-04-17 06:28:52.367280 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-17 06:28:52.367290 | orchestrator | Friday 17 April 2026 06:28:36 +0000 (0:00:02.031) 0:00:08.891 ********** 2026-04-17 06:28:52.367301 | orchestrator | ok: 
[testbed-manager] 2026-04-17 06:28:52.367312 | orchestrator | 2026-04-17 06:28:52.367323 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-17 06:28:52.367334 | orchestrator | Friday 17 April 2026 06:28:38 +0000 (0:00:01.889) 0:00:10.781 ********** 2026-04-17 06:28:52.367345 | orchestrator | ok: [testbed-manager] 2026-04-17 06:28:52.367356 | orchestrator | 2026-04-17 06:28:52.367366 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-17 06:28:52.367377 | orchestrator | Friday 17 April 2026 06:28:40 +0000 (0:00:01.916) 0:00:12.697 ********** 2026-04-17 06:28:52.367388 | orchestrator | ok: [testbed-manager] 2026-04-17 06:28:52.367398 | orchestrator | 2026-04-17 06:28:52.367409 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-17 06:28:52.367435 | orchestrator | Friday 17 April 2026 06:28:42 +0000 (0:00:02.383) 0:00:15.081 ********** 2026-04-17 06:28:52.367446 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-17 06:28:52.367457 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-04-17 06:28:52.367468 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-17 06:28:52.367479 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-17 06:28:52.367490 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-17 06:28:52.367500 | orchestrator | 2026-04-17 06:28:52.367511 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-17 06:28:52.367522 | orchestrator | Friday 17 April 2026 06:28:47 +0000 (0:00:05.302) 0:00:20.383 ********** 2026-04-17 06:28:52.367532 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-17 06:28:52.367543 | orchestrator | 2026-04-17 06:28:52.367554 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-17 06:28:52.367564 
| orchestrator | Friday 17 April 2026 06:28:49 +0000 (0:00:01.546) 0:00:21.930 ********** 2026-04-17 06:28:52.367575 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:52.367586 | orchestrator | 2026-04-17 06:28:52.367596 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-17 06:28:52.367607 | orchestrator | Friday 17 April 2026 06:28:50 +0000 (0:00:01.204) 0:00:23.134 ********** 2026-04-17 06:28:52.367617 | orchestrator | skipping: [testbed-manager] 2026-04-17 06:28:52.367628 | orchestrator | 2026-04-17 06:28:52.367638 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 06:28:52.367649 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 06:28:52.367661 | orchestrator | 2026-04-17 06:28:52.367671 | orchestrator | 2026-04-17 06:28:52.367683 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 06:28:52.367694 | orchestrator | Friday 17 April 2026 06:28:51 +0000 (0:00:01.460) 0:00:24.595 ********** 2026-04-17 06:28:52.367705 | orchestrator | =============================================================================== 2026-04-17 06:28:52.367715 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 5.30s 2026-04-17 06:28:52.367726 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.78s 2026-04-17 06:28:52.367736 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.38s 2026-04-17 06:28:52.367747 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 2.19s 2026-04-17 06:28:52.367757 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 2.03s 2026-04-17 06:28:52.367768 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file 
---------------- 1.92s 2026-04-17 06:28:52.367786 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.89s 2026-04-17 06:28:52.367797 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.55s 2026-04-17 06:28:52.367808 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.46s 2026-04-17 06:28:52.367818 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.20s 2026-04-17 06:28:52.559597 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-17 06:28:52.559678 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-04-17 06:28:52.572149 | orchestrator | + set -e 2026-04-17 06:28:52.572337 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 06:28:52.572354 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 06:28:52.572444 | orchestrator | ++ INTERACTIVE=false 2026-04-17 06:28:52.572462 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 06:28:52.572485 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 06:28:52.572512 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 06:28:52.572529 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 06:28:52.572548 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 06:28:52.572568 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 06:28:52.572586 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 06:28:52.572604 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 06:28:52.572624 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 06:28:52.572644 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-17 06:28:52.572663 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-17 06:28:52.572683 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 06:28:52.572704 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 06:28:52.572725 | orchestrator | ++ export ARA=false 2026-04-17 
06:28:52.572745 | orchestrator | ++ ARA=false 2026-04-17 06:28:52.572762 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 06:28:52.572773 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 06:28:52.572784 | orchestrator | ++ export TEMPEST=false 2026-04-17 06:28:52.572794 | orchestrator | ++ TEMPEST=false 2026-04-17 06:28:52.572805 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 06:28:52.572816 | orchestrator | ++ IS_ZUUL=true 2026-04-17 06:28:52.572840 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 06:28:52.572852 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 06:28:52.572863 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 06:28:52.572873 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 06:28:52.572884 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 06:28:52.572897 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 06:28:52.572915 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 06:28:52.572945 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 06:28:52.572964 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 06:28:52.572982 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 06:28:52.572998 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-17 06:28:52.573016 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-17 06:28:52.573033 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-17 06:28:52.573381 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-17 06:28:52.576876 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-17 06:28:52.576924 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-17 06:28:52.576934 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-17 06:28:52.576944 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-04-17 06:29:02.325938 | orchestrator | 2026-04-17 06:29:02 | ERROR  | Unable to get ansible vault password 
2026-04-17 06:29:02.326197 | orchestrator | 2026-04-17 06:29:02 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-17 06:29:02.326224 | orchestrator | 2026-04-17 06:29:02 | ERROR  | Dropping encrypted entries 2026-04-17 06:29:02.361229 | orchestrator | 2026-04-17 06:29:02 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-17 06:29:02.362208 | orchestrator | 2026-04-17 06:29:02 | INFO  | Kolla configuration check passed 2026-04-17 06:29:02.532124 | orchestrator | 2026-04-17 06:29:02 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-04-17 06:29:02.553849 | orchestrator | 2026-04-17 06:29:02 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-04-17 06:29:02.965712 | orchestrator | + osism migrate rabbitmq3to4 list 2026-04-17 06:29:09.337443 | orchestrator | 2026-04-17 06:29:09 | ERROR  | Unable to get ansible vault password 2026-04-17 06:29:09.337557 | orchestrator | 2026-04-17 06:29:09 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-17 06:29:09.337575 | orchestrator | 2026-04-17 06:29:09 | ERROR  | Dropping encrypted entries 2026-04-17 06:29:09.371800 | orchestrator | 2026-04-17 06:29:09 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-04-17 06:29:09.503654 | orchestrator | 2026-04-17 06:29:09 | INFO  | Found 207 classic queue(s) in vhost '/': 2026-04-17 06:29:09.503746 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-04-17 06:29:09.503761 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-04-17 06:29:09.503774 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-04-17 06:29:09.503786 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-04-17 06:29:09.503798 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - barbican.workers_fanout_20a058ccedf94e3eb50598d76ab757db (vhost: /, messages: 0) 2026-04-17 06:29:09.504682 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - barbican.workers_fanout_6e063d46835a4e11afca4900bcdaf99e (vhost: /, messages: 0) 2026-04-17 06:29:09.504721 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - barbican.workers_fanout_8ad5bbb91d084f8d91b232a35d94a57d (vhost: /, messages: 0) 2026-04-17 06:29:09.504884 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-04-17 06:29:09.505223 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - central (vhost: /, messages: 0) 2026-04-17 06:29:09.505577 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-04-17 06:29:09.505600 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-04-17 06:29:09.505611 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-04-17 06:29:09.505622 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - central_fanout_1d3f05b198cd48578310d7621ff0c8fe (vhost: /, messages: 0) 2026-04-17 06:29:09.505636 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - central_fanout_993e28f14edb4e3aab4835bf990259c4 (vhost: /, messages: 0) 2026-04-17 
06:29:09.506201 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - central_fanout_b84558daeb244be688f86c8b3badd6af (vhost: /, messages: 0) 2026-04-17 06:29:09.506225 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - central_fanout_dca8cef115594b468bf3a305f1a6295b (vhost: /, messages: 0) 2026-04-17 06:29:09.506236 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - central_fanout_f972dd7ad31146debaefdfa79aae1b67 (vhost: /, messages: 0) 2026-04-17 06:29:09.506247 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - central_fanout_fd6194b9a6854c94bf97e77ab30a4164 (vhost: /, messages: 0) 2026-04-17 06:29:09.506258 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-04-17 06:29:09.506345 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-04-17 06:29:09.506366 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-04-17 06:29:09.506606 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-04-17 06:29:09.506944 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-backup_fanout_6043b4b3acd24dafa5db15f8fafefc7f (vhost: /, messages: 0) 2026-04-17 06:29:09.506967 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-backup_fanout_a98a4ce7ac204850a64068be3ca991a5 (vhost: /, messages: 0) 2026-04-17 06:29:09.506979 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-backup_fanout_c18873a05f81445fa4f8270899bb6e95 (vhost: /, messages: 0) 2026-04-17 06:29:09.507006 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-04-17 06:29:09.507311 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-17 06:29:09.507333 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-17 06:29:09.507344 | orchestrator | 2026-04-17 
06:29:09 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-17 06:29:09.507356 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-scheduler_fanout_90e5c8e8cfb1472fb18235759a680c10 (vhost: /, messages: 0) 2026-04-17 06:29:09.507579 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-scheduler_fanout_92580677618d46d79abdd4b6bd36573b (vhost: /, messages: 0) 2026-04-17 06:29:09.507599 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-scheduler_fanout_db24bc34abee41ffaeaaa6f12985316a (vhost: /, messages: 0) 2026-04-17 06:29:09.507610 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-04-17 06:29:09.507621 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-04-17 06:29:09.507633 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-04-17 06:29:09.507811 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_f1d527c7b39c40e785f52b4829cc74a8 (vhost: /, messages: 0) 2026-04-17 06:29:09.507831 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-04-17 06:29:09.507979 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-04-17 06:29:09.507997 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_32743cb9ff824c2b98e02178b2ae6edd (vhost: /, messages: 0) 2026-04-17 06:29:09.508294 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-04-17 06:29:09.508318 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-04-17 06:29:09.508329 | orchestrator | 2026-04-17 06:29:09 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_aba44ff3406140d38e7b6e1931cd9d6f (vhost: /, messages: 0) 2026-04-17 06:29:09.508588 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume_fanout_c8d25dac39c144a88bce642dfea80221 (vhost: /, messages: 0) 2026-04-17 06:29:09.508607 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume_fanout_d8999b5380514162b7fe923d810176f6 (vhost: /, messages: 0) 2026-04-17 06:29:09.508619 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - cinder-volume_fanout_f519f942656044fcb979f7053cdaadbd (vhost: /, messages: 0) 2026-04-17 06:29:09.508635 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - compute (vhost: /, messages: 0) 2026-04-17 06:29:09.508826 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-04-17 06:29:09.509198 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-04-17 06:29:09.509390 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-04-17 06:29:09.509412 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - compute_fanout_272c94e2eaf7410b8e3afacc5782bdd4 (vhost: /, messages: 0) 2026-04-17 06:29:09.509431 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - compute_fanout_a7a2f90a7fb2479fbab09cb743903103 (vhost: /, messages: 0) 2026-04-17 06:29:09.509449 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - compute_fanout_b025c3f235574d5a969b915221b26794 (vhost: /, messages: 0) 2026-04-17 06:29:09.509466 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - conductor (vhost: /, messages: 0) 2026-04-17 06:29:09.510585 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-04-17 06:29:09.510616 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-04-17 06:29:09.510627 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-04-17 06:29:09.510637 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - conductor_fanout_13aa9d0211994afb8cb616161f36be9f (vhost: /, messages: 0) 2026-04-17 06:29:09.510659 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - conductor_fanout_3e01c97e1394440b81fb455c4973f414 (vhost: /, messages: 0) 2026-04-17 06:29:09.510670 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - conductor_fanout_420b4182e3694f8f8afd807bbfd3417b (vhost: /, messages: 0) 2026-04-17 06:29:09.510681 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - conductor_fanout_75cc114115784c099187abf4934bc96e (vhost: /, messages: 0) 2026-04-17 06:29:09.510691 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - conductor_fanout_9e2883220267483fa05a9d5749865bec (vhost: /, messages: 0) 2026-04-17 06:29:09.510702 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - conductor_fanout_b34db215554346218b738476023ff7e8 (vhost: /, messages: 0) 2026-04-17 06:29:09.510713 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - event.sample (vhost: /, messages: 4) 2026-04-17 06:29:09.510723 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-04-17 06:29:09.510734 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor.aync76lm2t54 (vhost: /, messages: 0) 2026-04-17 06:29:09.510745 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor.dzwct34rcrrw (vhost: /, messages: 0) 2026-04-17 06:29:09.510821 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor.s3ymcvio2asi (vhost: /, messages: 0) 2026-04-17 06:29:09.510834 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor_fanout_11601765e7c442fd998593624dd5766b (vhost: /, messages: 0) 2026-04-17 06:29:09.511002 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor_fanout_28d9c33e2e444996b3de7e9169a4ec20 (vhost: /, messages: 0) 2026-04-17 06:29:09.511105 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor_fanout_333db82f226d4c8a9b2d64d5a6fefd7e (vhost: /, 
messages: 0) 2026-04-17 06:29:09.511119 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor_fanout_55ff3626d6574cd4852f07888b0404ea (vhost: /, messages: 0) 2026-04-17 06:29:09.511407 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor_fanout_884a2875b69e49c1b7c59df2bc594c6e (vhost: /, messages: 0) 2026-04-17 06:29:09.511429 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor_fanout_90e197aecc46435a8911ea489b96df7c (vhost: /, messages: 0) 2026-04-17 06:29:09.512841 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor_fanout_b4258b3a1e9944fea5e6e830a4ad9242 (vhost: /, messages: 0) 2026-04-17 06:29:09.513131 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor_fanout_c75d781f355d4e3d967594d8b67236e4 (vhost: /, messages: 0) 2026-04-17 06:29:09.513154 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - magnum-conductor_fanout_e5e7b4f6d9b64be4b73b5d38b4f4e3cb (vhost: /, messages: 0) 2026-04-17 06:29:09.513166 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-04-17 06:29:09.513178 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-04-17 06:29:09.513189 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-04-17 06:29:09.513200 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-04-17 06:29:09.513210 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-data_fanout_1a85c9bc27754ae6b338357a5d9fba36 (vhost: /, messages: 0) 2026-04-17 06:29:09.513221 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-data_fanout_31565bcd2b9a46259945a7e746f84116 (vhost: /, messages: 0) 2026-04-17 06:29:09.513232 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-data_fanout_d83371e3acbd40b39708548d506bb4bd (vhost: /, messages: 0) 2026-04-17 06:29:09.513373 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - 
manila-scheduler (vhost: /, messages: 0) 2026-04-17 06:29:09.513386 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-17 06:29:09.513397 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-17 06:29:09.513408 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-17 06:29:09.513418 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-scheduler_fanout_41f8ff0f80d44f1b94249a6124a7d072 (vhost: /, messages: 0) 2026-04-17 06:29:09.513430 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-scheduler_fanout_e1cf8611a2b84615bd655a3388092054 (vhost: /, messages: 0) 2026-04-17 06:29:09.513453 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-scheduler_fanout_ec0a861a594144a49f71a502d98e9909 (vhost: /, messages: 0) 2026-04-17 06:29:09.513465 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-04-17 06:29:09.513535 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-04-17 06:29:09.513617 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-04-17 06:29:09.513684 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-04-17 06:29:09.513706 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-share_fanout_2812a2c7dca14caab7b2abbeda168aae (vhost: /, messages: 0) 2026-04-17 06:29:09.513717 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-share_fanout_283d9e86d6954bd7856fda8f0fd341f9 (vhost: /, messages: 0) 2026-04-17 06:29:09.513796 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - manila-share_fanout_d5aec83b3f244692ae60d87f1eacc951 (vhost: /, messages: 0) 2026-04-17 06:29:09.513813 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - 
notifications.audit (vhost: /, messages: 0)
2026-04-17 06:29:09.514199 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - notifications.critical (vhost: /, messages: 0)
2026-04-17 06:29:09.514444 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - notifications.debug (vhost: /, messages: 0)
2026-04-17 06:29:09.514474 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - notifications.error (vhost: /, messages: 0)
2026-04-17 06:29:09.514486 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - notifications.info (vhost: /, messages: 0)
2026-04-17 06:29:09.514497 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - notifications.sample (vhost: /, messages: 0)
2026-04-17 06:29:09.514882 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - notifications.warn (vhost: /, messages: 0)
2026-04-17 06:29:09.514903 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0)
2026-04-17 06:29:09.515483 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0)
2026-04-17 06:29:09.515501 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0)
2026-04-17 06:29:09.515511 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0)
2026-04-17 06:29:09.515521 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - octavia_provisioning_v2_fanout_2c84754a8986429dbf846a810793409c (vhost: /, messages: 0)
2026-04-17 06:29:09.515532 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - octavia_provisioning_v2_fanout_f1db7e4cb93240e0a6f263dbeabfb65c (vhost: /, messages: 0)
2026-04-17 06:29:09.515541 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - octavia_provisioning_v2_fanout_fdd4677c33ad459ebe141f14e5bb8f62 (vhost: /, messages: 0)
2026-04-17 06:29:09.515551 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - producer (vhost: /, messages: 0)
2026-04-17 06:29:09.515561 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0)
2026-04-17 06:29:09.515570 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0)
2026-04-17 06:29:09.515855 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0)
2026-04-17 06:29:09.515874 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - producer_fanout_17094020c5e345d882f1e31047fa2166 (vhost: /, messages: 0)
2026-04-17 06:29:09.515885 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - producer_fanout_24f3ae76481346be918965111f6f6c75 (vhost: /, messages: 0)
2026-04-17 06:29:09.515896 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - producer_fanout_41613344066444828f859ac217def4d8 (vhost: /, messages: 0)
2026-04-17 06:29:09.515912 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - producer_fanout_5d731c00521847df83a2e107755e0665 (vhost: /, messages: 0)
2026-04-17 06:29:09.515922 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - producer_fanout_6de7a46da95e4cf58eff9f521c447aee (vhost: /, messages: 0)
2026-04-17 06:29:09.516254 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - producer_fanout_b8b22cc72ded4377a9881e50b4e9ea60 (vhost: /, messages: 0)
2026-04-17 06:29:09.516272 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin (vhost: /, messages: 0)
2026-04-17 06:29:09.516291 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-04-17 06:29:09.516301 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-04-17 06:29:09.516421 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-04-17 06:29:09.516501 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin_fanout_3b7519a6a83045fc97e9861180224dda (vhost: /, messages: 0)
2026-04-17 06:29:09.516565 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin_fanout_505b57ff479e4a68b703312a933c0549 (vhost: /, messages: 0)
2026-04-17 06:29:09.516654 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin_fanout_5f9611026c4b470a9cfec69422fc0d5a (vhost: /, messages: 0)
2026-04-17 06:29:09.516983 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin_fanout_8683a031ab5e43419f3f80e8c468b0e2 (vhost: /, messages: 0)
2026-04-17 06:29:09.517002 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin_fanout_aa28c3ed1dc94b7c88c814cff978cc10 (vhost: /, messages: 0)
2026-04-17 06:29:09.517013 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin_fanout_d3657a30d4304c30bca4e57515e8e1de (vhost: /, messages: 0)
2026-04-17 06:29:09.517023 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin_fanout_e01b607d41b844a188969f33d64527fd (vhost: /, messages: 0)
2026-04-17 06:29:09.517291 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin_fanout_e473359f7af24e53a21ce05eee3a59ab (vhost: /, messages: 0)
2026-04-17 06:29:09.517309 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-plugin_fanout_fecce9e07da0445c80c356dc3c0c10ba (vhost: /, messages: 0)
2026-04-17 06:29:09.517319 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin (vhost: /, messages: 0)
2026-04-17 06:29:09.517328 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-04-17 06:29:09.517490 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-04-17 06:29:09.517506 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-04-17 06:29:09.517626 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_01755eb3644742db9761856c6d4fa79b (vhost: /, messages: 0)
2026-04-17 06:29:09.517708 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_0886ca097998406d982a3904ea22e157 (vhost: /, messages: 0)
2026-04-17 06:29:09.517721 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_19dd850628e34dcd8669fe62fdeb1d32 (vhost: /, messages: 0)
2026-04-17 06:29:09.517874 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_1a19b587edaa41f4b93230ca3fe43c36 (vhost: /, messages: 0)
2026-04-17 06:29:09.517889 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_2bc3f3f290a14545babcff26dfa2fe10 (vhost: /, messages: 0)
2026-04-17 06:29:09.518122 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_3295e17f9e6b4c02963fe12cbba61b8c (vhost: /, messages: 0)
2026-04-17 06:29:09.518143 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_39eb3eeb25844b2d84c0080e13c7ecf5 (vhost: /, messages: 0)
2026-04-17 06:29:09.518310 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_46c6dd24312c4ff4a60b9bb1de96ac00 (vhost: /, messages: 0)
2026-04-17 06:29:09.518430 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_6993610db2f44b2a8bb1c1da5f091d7d (vhost: /, messages: 0)
2026-04-17 06:29:09.518538 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_6a42031487394a15bc025e5d7f5d5749 (vhost: /, messages: 0)
2026-04-17 06:29:09.518720 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_856b9390128d4a3a8a96233059a56d52 (vhost: /, messages: 0)
2026-04-17 06:29:09.518833 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_8c62adc2beac4ba29cf011d5d445969e (vhost: /, messages: 0)
2026-04-17 06:29:09.518908 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_d648dafc36ad4003b228578b75e19fa3 (vhost: /, messages: 0)
2026-04-17 06:29:09.518969 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_e33362f343cb43c18ad723713adf23f7 (vhost: /, messages: 0)
2026-04-17 06:29:09.518997 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_f11f2dd650cc40fb9d7f2aefc687e0cc (vhost: /, messages: 0)
2026-04-17 06:29:09.519255 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_f3d03c907708482ebcb4661cdd1dda23 (vhost: /, messages: 0)
2026-04-17 06:29:09.519450 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_f528dc3462bf415c83ddf9b17c6d2556 (vhost: /, messages: 0)
2026-04-17 06:29:09.519468 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-reports-plugin_fanout_f5419c57c66840e9bcaed89ab51ac982 (vhost: /, messages: 0)
2026-04-17 06:29:09.519478 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0)
2026-04-17 06:29:09.519488 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0)
2026-04-17 06:29:09.519659 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0)
2026-04-17 06:29:09.519673 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0)
2026-04-17 06:29:09.519800 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions_fanout_1ae1d083d0fb4d668511dae87bbc7bcb (vhost: /, messages: 0)
2026-04-17 06:29:09.519901 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions_fanout_2fc761ec65f0466fb6bebf965f801935 (vhost: /, messages: 0)
2026-04-17 06:29:09.519914 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions_fanout_46cdb5dd62c54f12b9983fdf572f8a6f (vhost: /, messages: 0)
2026-04-17 06:29:09.519978 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions_fanout_5888135859b04e408b2d410cc006dffa (vhost: /, messages: 0)
2026-04-17 06:29:09.520223 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions_fanout_6b96001e56e74c05b5f3b1b295abd87d (vhost: /, messages: 0)
2026-04-17 06:29:09.520237 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions_fanout_7fd1c527d4944c0486d172596b262230 (vhost: /, messages: 0)
2026-04-17 06:29:09.520506 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions_fanout_845e0852e88a4b2b87da869b72e6c359 (vhost: /, messages: 0)
2026-04-17 06:29:09.520520 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions_fanout_d5fe3c3e498b4330a20aa0417ca66d9b (vhost: /, messages: 0)
2026-04-17 06:29:09.520528 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - q-server-resource-versions_fanout_ecc80885d2b7498da01360fa1a5a9f47 (vhost: /, messages: 0)
2026-04-17 06:29:09.520593 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_16a47e6e144e4e9eb082064723acff2d (vhost: /, messages: 0)
2026-04-17 06:29:09.520678 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_2480bc4be3124a83873b885e638e9188 (vhost: /, messages: 0)
2026-04-17 06:29:09.520788 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_2c17d20d06534f34bd816be7286ccdc2 (vhost: /, messages: 0)
2026-04-17 06:29:09.520876 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_328e15effeb94d7ea35238e7de877f95 (vhost: /, messages: 0)
2026-04-17 06:29:09.521158 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_330fd5dafd4a4487996385036a8a6aeb (vhost: /, messages: 0)
2026-04-17 06:29:09.521172 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_373db827f055481d9f9e822ae3a7efad (vhost: /, messages: 0)
2026-04-17 06:29:09.521347 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_4aaa9c1435f7477b84bc5d5e8918b195 (vhost: /, messages: 0)
2026-04-17 06:29:09.521360 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_4ba91385446b4b6f84b34f69d2891be2 (vhost: /, messages: 0)
2026-04-17 06:29:09.521378 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_554e3ea47d424e0b94ec43865b5b40ad (vhost: /, messages: 0)
2026-04-17 06:29:09.521445 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_6756d7581cab4b7f9073be9fa13ee6d1 (vhost: /, messages: 0)
2026-04-17 06:29:09.521531 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_6c20b09b10d64677a2cdcec5f39474eb (vhost: /, messages: 0)
2026-04-17 06:29:09.521597 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_7483d828da734af9ae08d26012450022 (vhost: /, messages: 0)
2026-04-17 06:29:09.521668 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_7c138f55d4a44ccdba3905e8ee1a42bc (vhost: /, messages: 0)
2026-04-17 06:29:09.521829 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_8e316462ec234151b33b9b7522554dc2 (vhost: /, messages: 1)
2026-04-17 06:29:09.521853 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_c616f416063e40ce9317290bd3fc464a (vhost: /, messages: 0)
2026-04-17 06:29:09.521985 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_e3ba3dff52564c91a928d0aca50b587f (vhost: /, messages: 0)
2026-04-17 06:29:09.522090 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_e9c4f1c2e4fc44e1865160226d7f6199 (vhost: /, messages: 0)
2026-04-17 06:29:09.522292 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - reply_f1982abd987d400886ef1fa888ea0982 (vhost: /, messages: 0)
2026-04-17 06:29:09.522410 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-04-17 06:29:09.522544 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-04-17 06:29:09.522598 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-04-17 06:29:09.522726 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-04-17 06:29:09.522813 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - scheduler_fanout_147ae16fec474360b46f1e1a0ddf0b80 (vhost: /, messages: 0)
2026-04-17 06:29:09.522918 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - scheduler_fanout_60a14b52f67743b783ffda6b648f6c3c (vhost: /, messages: 0)
2026-04-17 06:29:09.523018 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - scheduler_fanout_64baca2c47154e8c8f9ae29981c17550 (vhost: /, messages: 0)
2026-04-17 06:29:09.524927 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - scheduler_fanout_b22883b73ec440fab6d5e4fcb88e8258 (vhost: /, messages: 0)
2026-04-17 06:29:09.524945 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - scheduler_fanout_e1fe3c3cbc4240c2b611528309556b65 (vhost: /, messages: 0)
2026-04-17 06:29:09.524951 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - scheduler_fanout_edba00a2ff144dc6b966f61b6bae827f (vhost: /, messages: 0)
2026-04-17 06:29:09.524958 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - worker (vhost: /, messages: 0)
2026-04-17 06:29:09.524965 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-04-17 06:29:09.524972 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-04-17 06:29:09.524978 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-04-17 06:29:09.524985 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - worker_fanout_1382a663cb5249509b4ad3c892ad603d (vhost: /, messages: 0)
2026-04-17 06:29:09.524991 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - worker_fanout_29bd8282b07946fb8a2f2908cbedb782 (vhost: /, messages: 0)
2026-04-17 06:29:09.524998 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - worker_fanout_609708bc5dd440e2a6b56d99ae6a008f (vhost: /, messages: 0)
2026-04-17 06:29:09.525013 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - worker_fanout_a3218af8539a4693bbf1ed32fbb1a049 (vhost: /, messages: 0)
2026-04-17 06:29:09.525019 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - worker_fanout_a5dab00bcf4c4061815203224ef4f7a1 (vhost: /, messages: 0)
2026-04-17 06:29:09.525026 | orchestrator | 2026-04-17 06:29:09 | INFO  |  - worker_fanout_e33095a7518b4a86b127d642c475903e (vhost: /, messages: 0)
2026-04-17 06:29:09.864038 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-04-17 06:29:16.576250 | orchestrator | 2026-04-17 06:29:16 | ERROR  | Unable to get ansible vault password
2026-04-17 06:29:16.576358 | orchestrator | 2026-04-17 06:29:16 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-17 06:29:16.576375 | orchestrator | 2026-04-17 06:29:16 | ERROR  | Dropping encrypted entries
2026-04-17 06:29:16.610281 | orchestrator | 2026-04-17 06:29:16 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-17 06:29:16.640772 | orchestrator | 2026-04-17 06:29:16 | INFO  | Found 46 exchange(s) in vhost '/':
2026-04-17 06:29:16.640833 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - aodh (type: topic, transient)
2026-04-17 06:29:16.640926 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - barbican.workers_fanout (type: fanout, transient)
2026-04-17 06:29:16.640949 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - ceilometer (type: topic, transient)
2026-04-17 06:29:16.642828 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - central_fanout (type: fanout, transient)
2026-04-17 06:29:16.642851 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - cinder (type: topic, transient)
2026-04-17 06:29:16.642877 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - cinder-backup_fanout (type: fanout, transient)
2026-04-17 06:29:16.642888 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - cinder-scheduler_fanout (type: fanout, transient)
2026-04-17 06:29:16.642898 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout (type: fanout, transient)
2026-04-17 06:29:16.642979 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout (type: fanout, transient)
2026-04-17 06:29:16.642991 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout (type: fanout, transient)
2026-04-17 06:29:16.643001 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - cinder-volume_fanout (type: fanout, transient)
2026-04-17 06:29:16.643011 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - compute_fanout (type: fanout, transient)
2026-04-17 06:29:16.643026 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - conductor_fanout (type: fanout, transient)
2026-04-17 06:29:16.643037 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - designate (type: topic, transient)
2026-04-17 06:29:16.643181 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - dns (type: topic, transient)
2026-04-17 06:29:16.643370 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - glance (type: topic, transient)
2026-04-17 06:29:16.643561 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - heat (type: topic, transient)
2026-04-17 06:29:16.643653 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - ironic (type: topic, transient)
2026-04-17 06:29:16.644251 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - keystone (type: topic, transient)
2026-04-17 06:29:16.644336 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - l3_agent_fanout (type: fanout, transient)
2026-04-17 06:29:16.644378 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - magnum (type: topic, transient)
2026-04-17 06:29:16.644392 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - magnum-conductor_fanout (type: fanout, transient)
2026-04-17 06:29:16.644713 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - manila-data_fanout (type: fanout, transient)
2026-04-17 06:29:16.644738 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - manila-scheduler_fanout (type: fanout, transient)
2026-04-17 06:29:16.645039 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - manila-share_fanout (type: fanout, transient)
2026-04-17 06:29:16.645061 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - neutron (type: topic, transient)
2026-04-17 06:29:16.645094 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - neutron-vo-Network-1.1_fanout (type: fanout, transient)
2026-04-17 06:29:16.645106 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - neutron-vo-Port-1.10_fanout (type: fanout, transient)
2026-04-17 06:29:16.645459 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - neutron-vo-SecurityGroup-1.6_fanout (type: fanout, transient)
2026-04-17 06:29:16.645552 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - neutron-vo-SecurityGroupRule-1.3_fanout (type: fanout, transient)
2026-04-17 06:29:16.645566 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - neutron-vo-Subnet-1.2_fanout (type: fanout, transient)
2026-04-17 06:29:16.645582 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - nova (type: topic, transient)
2026-04-17 06:29:16.646057 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - octavia (type: topic, transient)
2026-04-17 06:29:16.646236 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - octavia_provisioning_v2_fanout (type: fanout, transient)
2026-04-17 06:29:16.646253 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - openstack (type: topic, transient)
2026-04-17 06:29:16.646263 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - producer_fanout (type: fanout, transient)
2026-04-17 06:29:16.646282 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - q-agent-notifier-port-update_fanout (type: fanout, transient)
2026-04-17 06:29:16.646292 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - q-agent-notifier-security_group-update_fanout (type: fanout, transient)
2026-04-17 06:29:16.646465 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - q-plugin_fanout (type: fanout, transient)
2026-04-17 06:29:16.646483 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - q-reports-plugin_fanout (type: fanout, transient)
2026-04-17 06:29:16.646731 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - q-server-resource-versions_fanout (type: fanout, transient)
2026-04-17 06:29:16.646812 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - scheduler_fanout (type: fanout, transient)
2026-04-17 06:29:16.646837 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - swift (type: topic, transient)
2026-04-17 06:29:16.646852 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - trove (type: topic, transient)
2026-04-17 06:29:16.647129 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - worker_fanout (type: fanout, transient)
2026-04-17 06:29:16.647147 | orchestrator | 2026-04-17 06:29:16 | INFO  |  - zaqar (type: topic, transient)
2026-04-17 06:29:16.921246 | orchestrator | + osism apply -a upgrade keystone
2026-04-17 06:29:18.270841 | orchestrator | 2026-04-17 06:29:18 | INFO  | Prepare task for execution of keystone.
2026-04-17 06:29:18.334868 | orchestrator | 2026-04-17 06:29:18 | INFO  | Task 21f7661a-4238-45a7-a862-a4f4c2335a72 (keystone) was prepared for execution.
2026-04-17 06:29:18.334957 | orchestrator | 2026-04-17 06:29:18 | INFO  | It takes a moment until task 21f7661a-4238-45a7-a862-a4f4c2335a72 (keystone) has been started and output is visible here.
2026-04-17 06:29:29.222307 | orchestrator |
2026-04-17 06:29:29.222424 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 06:29:29.222444 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-17 06:29:29.222458 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-17 06:29:29.222480 | orchestrator |
2026-04-17 06:29:29.222491 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 06:29:29.222502 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-17 06:29:29.222513 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-17 06:29:29.222535 | orchestrator | Friday 17 April 2026  06:29:23 +0000 (0:00:01.142)       0:00:01.142 **********
2026-04-17 06:29:29.222546 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:29:29.222557 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:29:29.222568 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:29:29.222579 | orchestrator |
2026-04-17 06:29:29.222589 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 06:29:29.222600 | orchestrator | Friday 17 April 2026  06:29:24 +0000 (0:00:01.268)       0:00:02.410 **********
2026-04-17 06:29:29.222611 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-17 06:29:29.222622 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-17 06:29:29.222633 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-17 06:29:29.222644 | orchestrator |
2026-04-17 06:29:29.222671 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-17 06:29:29.222683 | orchestrator |
2026-04-17 06:29:29.222694 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-17 06:29:29.222704 | orchestrator | Friday 17 April 2026  06:29:25 +0000 (0:00:01.166)       0:00:03.577 **********
2026-04-17 06:29:29.222715 | orchestrator | included: /ansible/roles/keystone/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 06:29:29.222727 | orchestrator |
2026-04-17 06:29:29.222738 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-17 06:29:29.222749 | orchestrator | Friday 17 April 2026  06:29:26 +0000 (0:00:01.357)       0:00:04.935 **********
2026-04-17 06:29:29.222765 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 06:29:29.222799 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 06:29:29.222851 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 06:29:29.222868 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 06:29:29.222882 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 06:29:29.222895 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 06:29:29.222913 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 06:29:29.222933 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 06:29:29.222955 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 06:29:35.606820 | orchestrator |
2026-04-17 06:29:35.606924 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-04-17 06:29:35.606939 | orchestrator | Friday 17 April 2026  06:29:29 +0000 (0:00:02.412)       0:00:07.347 **********
2026-04-17 06:29:35.606949 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:29:35.606961 | orchestrator |
2026-04-17 06:29:35.606971 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-04-17 06:29:35.606981 | orchestrator | Friday 17 April 2026  06:29:29 +0000 (0:00:00.197)       0:00:07.545 **********
2026-04-17 06:29:35.606991 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:29:35.607001 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:29:35.607011 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:29:35.607021 | orchestrator |
2026-04-17 06:29:35.607031 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-04-17 06:29:35.607041 | orchestrator | Friday 17 April 2026  06:29:29 +0000 (0:00:00.446)       0:00:07.992 **********
2026-04-17 06:29:35.607051 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 06:29:35.607061 | orchestrator |
2026-04-17 06:29:35.607070 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-17 06:29:35.607125 | orchestrator | Friday 17 April 2026  06:29:31 +0000 (0:00:01.316)       0:00:09.308 **********
2026-04-17 06:29:35.607137 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 06:29:35.607148 | orchestrator |
2026-04-17 06:29:35.607158 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-04-17 06:29:35.607168 | orchestrator | Friday 17 April 2026  06:29:32 +0000 (0:00:01.173)       0:00:10.481 **********
2026-04-17 06:29:35.607182 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 06:29:35.607230 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 06:29:35.607262 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-17 06:29:35.607275 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 06:29:35.607286 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-17 06:29:35.607296 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen
sshd 8023'], 'timeout': '30'}}}) 2026-04-17 06:29:35.607313 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 06:29:35.607329 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 06:29:35.607340 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 06:29:35.607350 | orchestrator | 2026-04-17 06:29:35.607368 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-17 06:29:36.994602 | orchestrator | Friday 17 April 2026 06:29:35 +0000 (0:00:03.157) 0:00:13.638 ********** 2026-04-17 06:29:36.994711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-17 06:29:36.994733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:29:36.994771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:29:36.994785 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:29:36.994813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}}}})  2026-04-17 06:29:36.994845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:29:36.994858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:29:36.994869 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:29:36.994880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-17 06:29:36.994900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:29:36.994917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:29:36.994929 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:29:36.994940 | 
orchestrator | 2026-04-17 06:29:36.994952 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-17 06:29:36.994963 | orchestrator | Friday 17 April 2026 06:29:36 +0000 (0:00:01.066) 0:00:14.705 ********** 2026-04-17 06:29:36.994984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-17 06:29:38.997063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:29:38.997208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:29:38.997224 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:29:38.997254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-17 06:29:38.997267 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:29:38.997277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:29:38.997288 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:29:38.997317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-17 06:29:38.997336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:29:38.997346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:29:38.997356 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:29:38.997366 | orchestrator | 2026-04-17 06:29:38.997377 | orchestrator | TASK 
[keystone : Copying over config.json files for services] ****************** 2026-04-17 06:29:38.997388 | orchestrator | Friday 17 April 2026 06:29:37 +0000 (0:00:00.957) 0:00:15.662 ********** 2026-04-17 06:29:38.997404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-17 06:29:38.997423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-17 06:29:44.065572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-17 06:29:44.065684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 06:29:44.065719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 06:29:44.065732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 06:29:44.065763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 06:29:44.065793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 06:29:44.065831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 06:29:44.065844 | orchestrator | 2026-04-17 06:29:44.065857 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-17 06:29:44.065869 | orchestrator | Friday 17 April 2026 06:29:41 +0000 (0:00:03.416) 0:00:19.078 ********** 2026-04-17 06:29:44.065882 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-17 06:29:44.065899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:29:44.065912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-17 06:29:44.065941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:29:50.234204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-17 06:29:50.234346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:29:50.234388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}}) 2026-04-17 06:29:50.234404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 06:29:50.234415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 06:29:50.234446 | orchestrator | 2026-04-17 06:29:50.234460 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-17 06:29:50.234472 | orchestrator | Friday 17 April 2026 06:29:46 +0000 (0:00:05.763) 0:00:24.841 ********** 2026-04-17 06:29:50.234484 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:29:50.234496 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:29:50.234507 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:29:50.234517 | orchestrator | 2026-04-17 06:29:50.234528 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] 
************* 2026-04-17 06:29:50.234539 | orchestrator | Friday 17 April 2026 06:29:48 +0000 (0:00:01.459) 0:00:26.301 ********** 2026-04-17 06:29:50.234550 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:29:50.234581 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:29:50.234593 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:29:50.234604 | orchestrator | 2026-04-17 06:29:50.234615 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-17 06:29:50.234626 | orchestrator | Friday 17 April 2026 06:29:48 +0000 (0:00:00.639) 0:00:26.940 ********** 2026-04-17 06:29:50.234636 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:29:50.234647 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:29:50.234658 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:29:50.234669 | orchestrator | 2026-04-17 06:29:50.234680 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-17 06:29:50.234691 | orchestrator | Friday 17 April 2026 06:29:49 +0000 (0:00:00.361) 0:00:27.301 ********** 2026-04-17 06:29:50.234701 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:29:50.234712 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:29:50.234722 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:29:50.234733 | orchestrator | 2026-04-17 06:29:50.234744 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-17 06:29:50.234754 | orchestrator | Friday 17 April 2026 06:29:49 +0000 (0:00:00.557) 0:00:27.859 ********** 2026-04-17 06:29:50.234772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-17 06:29:50.234786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:29:50.234806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:29:50.234817 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:29:50.234839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-17 06:30:07.542370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:30:07.542493 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:30:07.542512 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:30:07.542544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-17 06:30:07.542582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:30:07.542595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:30:07.542606 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:30:07.542618 | orchestrator | 2026-04-17 06:30:07.542630 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-17 06:30:07.542642 | orchestrator | Friday 17 April 2026 06:29:50 +0000 (0:00:00.654) 0:00:28.514 ********** 2026-04-17 06:30:07.542653 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:30:07.542664 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:30:07.542674 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:30:07.542685 | orchestrator | 2026-04-17 06:30:07.542696 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-17 06:30:07.542724 | orchestrator | Friday 17 April 2026 06:29:50 
+0000 (0:00:00.299) 0:00:28.814 ********** 2026-04-17 06:30:07.542737 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-17 06:30:07.542749 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-17 06:30:07.542759 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-17 06:30:07.542770 | orchestrator | 2026-04-17 06:30:07.542781 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-17 06:30:07.542792 | orchestrator | Friday 17 April 2026 06:29:52 +0000 (0:00:02.027) 0:00:30.841 ********** 2026-04-17 06:30:07.542803 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 06:30:07.542813 | orchestrator | 2026-04-17 06:30:07.542824 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-17 06:30:07.542835 | orchestrator | Friday 17 April 2026 06:29:53 +0000 (0:00:01.003) 0:00:31.844 ********** 2026-04-17 06:30:07.542846 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:30:07.542857 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:30:07.542867 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:30:07.542878 | orchestrator | 2026-04-17 06:30:07.542889 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-17 06:30:07.542901 | orchestrator | Friday 17 April 2026 06:29:54 +0000 (0:00:00.577) 0:00:32.422 ********** 2026-04-17 06:30:07.542922 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 06:30:07.542936 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 06:30:07.542948 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 06:30:07.542961 | orchestrator | 2026-04-17 06:30:07.542974 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 
2026-04-17 06:30:07.542986 | orchestrator | Friday 17 April 2026 06:29:55 +0000 (0:00:01.132) 0:00:33.554 ********** 2026-04-17 06:30:07.542999 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:30:07.543012 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:30:07.543025 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:30:07.543037 | orchestrator | 2026-04-17 06:30:07.543050 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-17 06:30:07.543063 | orchestrator | Friday 17 April 2026 06:29:55 +0000 (0:00:00.348) 0:00:33.903 ********** 2026-04-17 06:30:07.543081 | orchestrator | ok: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-17 06:30:07.543115 | orchestrator | ok: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-17 06:30:07.543128 | orchestrator | ok: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-17 06:30:07.543140 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-17 06:30:07.543153 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-17 06:30:07.543166 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-17 06:30:07.543179 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-17 06:30:07.543192 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-17 06:30:07.543204 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-17 06:30:07.543216 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-17 06:30:07.543229 | orchestrator | ok: [testbed-node-0] => (item={'src': 
'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-17 06:30:07.543243 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-17 06:30:07.543256 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-17 06:30:07.543267 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-17 06:30:07.543278 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-17 06:30:07.543288 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 06:30:07.543300 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 06:30:07.543310 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 06:30:07.543321 | orchestrator | ok: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 06:30:07.543332 | orchestrator | ok: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 06:30:07.543342 | orchestrator | ok: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 06:30:07.543353 | orchestrator | 2026-04-17 06:30:07.543364 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-17 06:30:07.543375 | orchestrator | Friday 17 April 2026 06:30:05 +0000 (0:00:09.209) 0:00:43.113 ********** 2026-04-17 06:30:07.543386 | orchestrator | ok: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 06:30:07.543397 | orchestrator | ok: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 06:30:07.543407 | orchestrator | ok: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 06:30:07.543426 | 
orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 06:30:07.543444 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 06:30:12.248732 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 06:30:12.248837 | orchestrator | 2026-04-17 06:30:12.248862 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-17 06:30:12.248893 | orchestrator | Friday 17 April 2026 06:30:08 +0000 (0:00:03.020) 0:00:46.134 ********** 2026-04-17 06:30:12.248922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-17 06:30:12.248970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-17 06:30:12.248994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}}}}) 2026-04-17 06:30:12.249040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 06:30:12.249085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 06:30:12.249132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 
06:30:12.249153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 06:30:12.249166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 06:30:12.249177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2026-04-17 06:30:12.249188 | orchestrator | 2026-04-17 06:30:12.249200 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-17 06:30:12.249211 | orchestrator | Friday 17 April 2026 06:30:11 +0000 (0:00:03.125) 0:00:49.260 ********** 2026-04-17 06:30:12.249230 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 06:30:12.249242 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:30:12.249253 | orchestrator | } 2026-04-17 06:30:12.249265 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 06:30:12.249278 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:30:12.249290 | orchestrator | } 2026-04-17 06:30:12.249302 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 06:30:12.249313 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:30:12.249325 | orchestrator | } 2026-04-17 06:30:12.249338 | orchestrator | 2026-04-17 06:30:12.249350 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 06:30:12.249363 | orchestrator | Friday 17 April 2026 06:30:11 +0000 (0:00:00.632) 0:00:49.892 ********** 2026-04-17 06:30:12.249389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-17 06:32:11.183741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:32:11.183850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:32:11.183862 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:32:11.183872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-17 06:32:11.183899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:32:11.183905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:32:11.183911 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:32:11.183931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-17 06:32:11.183941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 06:32:11.183948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 06:32:11.183959 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:32:11.183965 | orchestrator | 2026-04-17 06:32:11.183971 | orchestrator | TASK [keystone : Enable log_bin_trust_function_creators function] ************** 2026-04-17 06:32:11.183978 | orchestrator | Friday 17 April 2026 06:30:13 +0000 (0:00:01.324) 0:00:51.217 ********** 2026-04-17 06:32:11.183984 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:32:11.183990 | orchestrator | 2026-04-17 06:32:11.183996 | orchestrator | TASK [keystone : Init keystone database upgrade] ******************************* 2026-04-17 06:32:11.184001 | orchestrator | Friday 17 April 2026 06:30:15 +0000 (0:00:02.170) 0:00:53.387 ********** 2026-04-17 06:32:11.184007 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:32:11.184013 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:32:11.184019 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:32:11.184024 | orchestrator | 2026-04-17 06:32:11.184030 | orchestrator | TASK [keystone : Finish keystone database upgrade] ***************************** 2026-04-17 06:32:11.184036 | orchestrator | Friday 17 April 2026 06:30:15 +0000 (0:00:00.449) 0:00:53.836 ********** 2026-04-17 
06:32:11.184041 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:32:11.184047 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:32:11.184053 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:32:11.184059 | orchestrator | 2026-04-17 06:32:11.184064 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-17 06:32:11.184070 | orchestrator | Friday 17 April 2026 06:30:16 +0000 (0:00:00.907) 0:00:54.743 ********** 2026-04-17 06:32:11.184076 | orchestrator | 2026-04-17 06:32:11.184081 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-17 06:32:11.184087 | orchestrator | Friday 17 April 2026 06:30:16 +0000 (0:00:00.078) 0:00:54.821 ********** 2026-04-17 06:32:11.184093 | orchestrator | 2026-04-17 06:32:11.184099 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-17 06:32:11.184104 | orchestrator | Friday 17 April 2026 06:30:16 +0000 (0:00:00.075) 0:00:54.896 ********** 2026-04-17 06:32:11.184110 | orchestrator | 2026-04-17 06:32:11.184116 | orchestrator | RUNNING HANDLER [keystone : Init keystone database upgrade] ******************** 2026-04-17 06:32:11.184122 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-17 06:32:11.184128 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-17 06:32:11.184139 | orchestrator | Friday 17 April 2026 06:30:16 +0000 (0:00:00.075) 0:00:54.972 ********** 2026-04-17 06:32:11.184187 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:32:11.184194 | orchestrator | 2026-04-17 06:32:11.184200 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-17 06:32:11.184206 | orchestrator | Friday 17 April 2026 06:31:19 +0000 (0:01:02.290) 0:01:57.262 ********** 2026-04-17 06:32:11.184211 | orchestrator | changed: [testbed-node-0] 2026-04-17 
06:32:11.184217 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:32:11.184223 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:32:11.184229 | orchestrator | 2026-04-17 06:32:11.184235 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-17 06:32:11.184245 | orchestrator | Friday 17 April 2026 06:32:11 +0000 (0:00:51.948) 0:02:49.210 ********** 2026-04-17 06:32:52.792140 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:32:52.792300 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:32:52.792316 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:32:52.792328 | orchestrator | 2026-04-17 06:32:52.792341 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-17 06:32:52.792353 | orchestrator | Friday 17 April 2026 06:32:23 +0000 (0:00:12.099) 0:03:01.310 ********** 2026-04-17 06:32:52.792364 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:32:52.792375 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:32:52.792385 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:32:52.792423 | orchestrator | 2026-04-17 06:32:52.792435 | orchestrator | RUNNING HANDLER [keystone : Finish keystone database upgrade] ****************** 2026-04-17 06:32:52.792446 | orchestrator | Friday 17 April 2026 06:32:37 +0000 (0:00:14.245) 0:03:15.555 ********** 2026-04-17 06:32:52.792457 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:32:52.792468 | orchestrator | 2026-04-17 06:32:52.792479 | orchestrator | TASK [keystone : Disable log_bin_trust_function_creators function] ************* 2026-04-17 06:32:52.792504 | orchestrator | Friday 17 April 2026 06:32:49 +0000 (0:00:11.936) 0:03:27.492 ********** 2026-04-17 06:32:52.792515 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:32:52.792526 | orchestrator | 2026-04-17 06:32:52.792537 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-17 06:32:52.792549 | orchestrator | testbed-node-0 : ok=25  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 06:32:52.792562 | orchestrator | testbed-node-1 : ok=19  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 06:32:52.792572 | orchestrator | testbed-node-2 : ok=21  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-17 06:32:52.792583 | orchestrator | 2026-04-17 06:32:52.792594 | orchestrator | 2026-04-17 06:32:52.792604 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 06:32:52.792615 | orchestrator | Friday 17 April 2026 06:32:52 +0000 (0:00:02.916) 0:03:30.408 ********** 2026-04-17 06:32:52.792626 | orchestrator | =============================================================================== 2026-04-17 06:32:52.792637 | orchestrator | keystone : Init keystone database upgrade ------------------------------ 62.29s 2026-04-17 06:32:52.792647 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 51.95s 2026-04-17 06:32:52.792658 | orchestrator | keystone : Restart keystone container ---------------------------------- 14.24s 2026-04-17 06:32:52.792669 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 12.10s 2026-04-17 06:32:52.792681 | orchestrator | keystone : Finish keystone database upgrade ---------------------------- 11.94s 2026-04-17 06:32:52.792694 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.21s 2026-04-17 06:32:52.792707 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.76s 2026-04-17 06:32:52.792719 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.42s 2026-04-17 06:32:52.792731 | orchestrator | service-cert-copy : keystone | 
Copying over extra CA certificates ------- 3.16s 2026-04-17 06:32:52.792744 | orchestrator | service-check-containers : keystone | Check containers ------------------ 3.13s 2026-04-17 06:32:52.792757 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.02s 2026-04-17 06:32:52.792769 | orchestrator | keystone : Disable log_bin_trust_function_creators function ------------- 2.92s 2026-04-17 06:32:52.792781 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.41s 2026-04-17 06:32:52.792793 | orchestrator | keystone : Enable log_bin_trust_function_creators function -------------- 2.17s 2026-04-17 06:32:52.792806 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.03s 2026-04-17 06:32:52.792818 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.46s 2026-04-17 06:32:52.792830 | orchestrator | keystone : include_tasks ------------------------------------------------ 1.36s 2026-04-17 06:32:52.792842 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.32s 2026-04-17 06:32:52.792854 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 1.32s 2026-04-17 06:32:52.792866 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.27s 2026-04-17 06:32:52.990149 | orchestrator | + osism apply -a upgrade placement 2026-04-17 06:32:54.261471 | orchestrator | 2026-04-17 06:32:54 | INFO  | Prepare task for execution of placement. 2026-04-17 06:32:54.328485 | orchestrator | 2026-04-17 06:32:54 | INFO  | Task e3c8c4fa-7084-45ee-92c0-447f826098fa (placement) was prepared for execution. 2026-04-17 06:32:54.328582 | orchestrator | 2026-04-17 06:32:54 | INFO  | It takes a moment until task e3c8c4fa-7084-45ee-92c0-447f826098fa (placement) has been started and output is visible here. 
2026-04-17 06:33:34.132494 | orchestrator | 2026-04-17 06:33:34.132646 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 06:33:34.132664 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-17 06:33:34.132674 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-17 06:33:34.132688 | orchestrator | 2026-04-17 06:33:34.132695 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 06:33:34.132701 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-17 06:33:34.132707 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-17 06:33:34.132720 | orchestrator | Friday 17 April 2026 06:32:58 +0000 (0:00:01.118) 0:00:01.118 ********** 2026-04-17 06:33:34.132727 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:33:34.132735 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:33:34.132741 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:33:34.132747 | orchestrator | 2026-04-17 06:33:34.132754 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 06:33:34.132761 | orchestrator | Friday 17 April 2026 06:32:59 +0000 (0:00:01.073) 0:00:02.191 ********** 2026-04-17 06:33:34.132769 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-17 06:33:34.132776 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-17 06:33:34.132803 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-17 06:33:34.132810 | orchestrator | 2026-04-17 06:33:34.132817 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-17 06:33:34.132824 | orchestrator | 2026-04-17 06:33:34.132830 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-17 
06:33:34.132837 | orchestrator | Friday 17 April 2026 06:33:00 +0000 (0:00:00.773) 0:00:02.965 ********** 2026-04-17 06:33:34.132845 | orchestrator | included: /ansible/roles/placement/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:33:34.132852 | orchestrator | 2026-04-17 06:33:34.132859 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************ 2026-04-17 06:33:34.132866 | orchestrator | Friday 17 April 2026 06:33:01 +0000 (0:00:01.254) 0:00:04.220 ********** 2026-04-17 06:33:34.132872 | orchestrator | ok: [testbed-node-0] => (item=placement (placement)) 2026-04-17 06:33:34.132879 | orchestrator | 2026-04-17 06:33:34.132886 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] *********** 2026-04-17 06:33:34.132893 | orchestrator | Friday 17 April 2026 06:33:06 +0000 (0:00:04.230) 0:00:08.450 ********** 2026-04-17 06:33:34.132900 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-17 06:33:34.132909 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-17 06:33:34.132916 | orchestrator | 2026-04-17 06:33:34.132922 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-17 06:33:34.132930 | orchestrator | Friday 17 April 2026 06:33:13 +0000 (0:00:07.041) 0:00:15.491 ********** 2026-04-17 06:33:34.132935 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 06:33:34.132939 | orchestrator | 2026-04-17 06:33:34.132943 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-17 06:33:34.132947 | orchestrator | Friday 17 April 2026 06:33:16 +0000 (0:00:03.340) 0:00:18.832 ********** 2026-04-17 06:33:34.132973 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-17 06:33:34.132979 | orchestrator | 
[WARNING]: Module did not set no_log for update_password 2026-04-17 06:33:34.132985 | orchestrator | 2026-04-17 06:33:34.132991 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-17 06:33:34.132996 | orchestrator | Friday 17 April 2026 06:33:21 +0000 (0:00:05.131) 0:00:23.963 ********** 2026-04-17 06:33:34.133002 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 06:33:34.133012 | orchestrator | 2026-04-17 06:33:34.133020 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] ********** 2026-04-17 06:33:34.133026 | orchestrator | Friday 17 April 2026 06:33:24 +0000 (0:00:03.188) 0:00:27.151 ********** 2026-04-17 06:33:34.133032 | orchestrator | ok: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-17 06:33:34.133039 | orchestrator | 2026-04-17 06:33:34.133045 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-17 06:33:34.133055 | orchestrator | Friday 17 April 2026 06:33:29 +0000 (0:00:04.626) 0:00:31.778 ********** 2026-04-17 06:33:34.133062 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:33:34.133069 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:33:34.133075 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:33:34.133081 | orchestrator | 2026-04-17 06:33:34.133088 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-17 06:33:34.133095 | orchestrator | Friday 17 April 2026 06:33:30 +0000 (0:00:00.701) 0:00:32.480 ********** 2026-04-17 06:33:34.133134 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:34.133151 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:34.133158 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:34.133170 | orchestrator | 2026-04-17 06:33:34.133214 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-17 06:33:34.133221 | orchestrator | Friday 17 April 2026 06:33:31 +0000 (0:00:01.201) 0:00:33.681 ********** 2026-04-17 06:33:34.133226 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:33:34.133230 | orchestrator | 2026-04-17 06:33:34.133235 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-17 06:33:34.133239 | orchestrator | Friday 17 April 2026 06:33:31 +0000 (0:00:00.141) 0:00:33.823 ********** 2026-04-17 06:33:34.133244 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:33:34.133249 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:33:34.133253 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:33:34.133258 | orchestrator | 2026-04-17 06:33:34.133262 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-17 06:33:34.133267 | orchestrator | Friday 17 April 2026 
06:33:31 +0000 (0:00:00.325) 0:00:34.148 ********** 2026-04-17 06:33:34.133271 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:33:34.133276 | orchestrator | 2026-04-17 06:33:34.133280 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-17 06:33:34.133284 | orchestrator | Friday 17 April 2026 06:33:32 +0000 (0:00:01.145) 0:00:35.294 ********** 2026-04-17 06:33:34.133294 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:35.807121 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:35.807350 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:35.807369 | orchestrator | 2026-04-17 06:33:35.807383 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-17 06:33:35.807397 | orchestrator | Friday 17 
April 2026 06:33:34 +0000 (0:00:01.475) 0:00:36.769 ********** 2026-04-17 06:33:35.807410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:33:35.807423 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:33:35.807459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:33:35.807472 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:33:35.807484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:33:35.807504 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:33:35.807516 | orchestrator | 2026-04-17 06:33:35.807527 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-17 06:33:35.807538 | orchestrator | Friday 17 April 2026 06:33:35 +0000 (0:00:00.930) 0:00:37.699 ********** 2026-04-17 06:33:35.807549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:33:35.807561 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:33:35.807695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:33:35.807718 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:33:35.807751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:33:45.626707 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:33:45.626888 | orchestrator | 2026-04-17 06:33:45.626915 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-17 06:33:45.626933 | orchestrator | Friday 17 April 2026 06:33:36 +0000 (0:00:00.726) 0:00:38.425 ********** 2026-04-17 06:33:45.626948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:45.626964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:45.626976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:45.627019 | orchestrator | 2026-04-17 06:33:45.627030 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-17 06:33:45.627040 | orchestrator | Friday 17 April 2026 06:33:37 +0000 (0:00:01.494) 0:00:39.920 ********** 2026-04-17 06:33:45.627092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:45.627104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:45.627116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:45.627126 | orchestrator | 2026-04-17 06:33:45.627136 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-17 06:33:45.627147 | orchestrator | Friday 17 April 2026 06:33:40 +0000 (0:00:02.796) 0:00:42.716 ********** 2026-04-17 06:33:45.627156 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-17 06:33:45.627167 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:33:45.627177 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-17 06:33:45.627222 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:33:45.627234 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-17 06:33:45.627245 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:33:45.627256 | orchestrator | 2026-04-17 06:33:45.627267 | orchestrator | TASK [Configure uWSGI for Placement] ******************************************* 2026-04-17 06:33:45.627278 | orchestrator | Friday 17 April 2026 06:33:40 +0000 (0:00:00.583) 0:00:43.300 ********** 2026-04-17 06:33:45.627289 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:33:45.627302 | orchestrator | 2026-04-17 06:33:45.627312 | orchestrator | TASK [service-uwsgi-config : Copying 
over placement-api uWSGI config] ********** 2026-04-17 06:33:45.627323 | orchestrator | Friday 17 April 2026 06:33:42 +0000 (0:00:01.191) 0:00:44.491 ********** 2026-04-17 06:33:45.627335 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:33:45.627345 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:33:45.627356 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:33:45.627367 | orchestrator | 2026-04-17 06:33:45.627377 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-17 06:33:45.627404 | orchestrator | Friday 17 April 2026 06:33:44 +0000 (0:00:02.149) 0:00:46.641 ********** 2026-04-17 06:33:45.627417 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:33:45.627435 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:33:45.627451 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:33:45.627466 | orchestrator | 2026-04-17 06:33:45.627492 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-17 06:33:48.920603 | orchestrator | Friday 17 April 2026 06:33:45 +0000 (0:00:01.303) 0:00:47.944 ********** 2026-04-17 06:33:48.920769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET 
/']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:33:48.920801 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:33:48.920823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:33:48.920888 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:33:48.920911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:33:48.920932 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:33:48.920951 | orchestrator | 2026-04-17 06:33:48.920971 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-04-17 06:33:48.920990 | orchestrator | Friday 17 April 2026 06:33:46 +0000 (0:00:01.323) 0:00:49.268 ********** 2026-04-17 06:33:48.921059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}}}}) 2026-04-17 06:33:48.921083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:48.921105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 06:33:48.921139 | orchestrator | 2026-04-17 06:33:48.921159 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] *** 2026-04-17 06:33:48.921178 | orchestrator | Friday 17 April 2026 06:33:48 +0000 (0:00:01.337) 0:00:50.605 ********** 2026-04-17 06:33:48.921225 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 06:33:48.921244 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:33:48.921263 | orchestrator | } 2026-04-17 06:33:48.921282 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 06:33:48.921300 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:33:48.921318 | orchestrator | } 2026-04-17 06:33:48.921336 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 06:33:48.921353 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:33:48.921372 | orchestrator | } 2026-04-17 06:33:48.921392 | orchestrator | 2026-04-17 06:33:48.921410 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 06:33:48.921428 | orchestrator | Friday 17 April 2026 06:33:48 +0000 (0:00:00.360) 0:00:50.966 ********** 2026-04-17 06:33:48.921468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:34:31.831894 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:34:31.832021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:34:31.832043 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:34:31.832057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 06:34:31.832092 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:34:31.832104 | orchestrator | 2026-04-17 06:34:31.832117 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-17 06:34:31.832129 | orchestrator | Friday 17 April 2026 06:33:50 +0000 (0:00:01.373) 0:00:52.340 ********** 2026-04-17 06:34:31.832140 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:34:31.832151 | orchestrator | 2026-04-17 06:34:31.832162 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-17 06:34:31.832173 | orchestrator | Friday 17 April 2026 06:33:52 +0000 (0:00:02.086) 0:00:54.426 ********** 2026-04-17 06:34:31.832184 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:34:31.832249 | orchestrator | 2026-04-17 06:34:31.832263 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-17 06:34:31.832274 | orchestrator | Friday 17 April 2026 06:33:54 +0000 (0:00:02.450) 0:00:56.876 ********** 2026-04-17 06:34:31.832285 | 
orchestrator | changed: [testbed-node-0]
2026-04-17 06:34:31.832296 | orchestrator |
2026-04-17 06:34:31.832307 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-17 06:34:31.832317 | orchestrator | Friday 17 April 2026 06:34:08 +0000 (0:00:13.604) 0:01:10.481 **********
2026-04-17 06:34:31.832328 | orchestrator |
2026-04-17 06:34:31.832338 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-17 06:34:31.832349 | orchestrator | Friday 17 April 2026 06:34:08 +0000 (0:00:00.076) 0:01:10.557 **********
2026-04-17 06:34:31.832359 | orchestrator |
2026-04-17 06:34:31.832370 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-17 06:34:31.832381 | orchestrator | Friday 17 April 2026 06:34:08 +0000 (0:00:00.093) 0:01:10.650 **********
2026-04-17 06:34:31.832391 | orchestrator |
2026-04-17 06:34:31.832402 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-04-17 06:34:31.832413 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-17 06:34:31.832427 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-17 06:34:31.832468 | orchestrator | Friday 17 April 2026 06:34:08 +0000 (0:00:00.092) 0:01:10.743 **********
2026-04-17 06:34:31.832480 | orchestrator | changed: [testbed-node-0]
2026-04-17 06:34:31.832493 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:34:31.832505 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:34:31.832518 | orchestrator |
2026-04-17 06:34:31.832530 | orchestrator | TASK [placement : Perform Placement online data migration] *********************
2026-04-17 06:34:31.832542 | orchestrator | Friday 17 April 2026 06:34:19 +0000 (0:00:11.570) 0:01:22.314 **********
2026-04-17 06:34:31.832554 | orchestrator | changed: [testbed-node-0]
2026-04-17 06:34:31.832566 |
orchestrator |
2026-04-17 06:34:31.832595 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 06:34:31.832609 | orchestrator | testbed-node-0 : ok=24  changed=9  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-17 06:34:31.832632 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 06:34:31.832644 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 06:34:31.832656 | orchestrator |
2026-04-17 06:34:31.832668 | orchestrator |
2026-04-17 06:34:31.832681 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 06:34:31.832693 | orchestrator | Friday 17 April 2026 06:34:31 +0000 (0:00:11.491) 0:01:33.806 **********
2026-04-17 06:34:31.832705 | orchestrator | ===============================================================================
2026-04-17 06:34:31.832717 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.60s
2026-04-17 06:34:31.832729 | orchestrator | placement : Restart placement-api container ---------------------------- 11.57s
2026-04-17 06:34:31.832742 | orchestrator | placement : Perform Placement online data migration -------------------- 11.49s
2026-04-17 06:34:31.832753 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 7.04s
2026-04-17 06:34:31.832766 | orchestrator | service-ks-register : placement | Creating users ------------------------ 5.13s
2026-04-17 06:34:31.832778 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 4.63s
2026-04-17 06:34:31.832791 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 4.23s
2026-04-17 06:34:31.832803 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.34s
2026-04-17 06:34:31.832814 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.19s
2026-04-17 06:34:31.832825 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.80s
2026-04-17 06:34:31.832835 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.45s
2026-04-17 06:34:31.832846 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 2.15s
2026-04-17 06:34:31.832857 | orchestrator | placement : Creating placement databases -------------------------------- 2.09s
2026-04-17 06:34:31.832867 | orchestrator | placement : Copying over config.json files for services ----------------- 1.49s
2026-04-17 06:34:31.832878 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.48s
2026-04-17 06:34:31.832889 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.37s
2026-04-17 06:34:31.832899 | orchestrator | service-check-containers : placement | Check containers ----------------- 1.34s
2026-04-17 06:34:31.832910 | orchestrator | placement : Copying over existing policy file --------------------------- 1.32s
2026-04-17 06:34:31.832921 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.30s
2026-04-17 06:34:31.832931 | orchestrator | placement : include_tasks ----------------------------------------------- 1.25s
2026-04-17 06:34:32.045833 | orchestrator | + osism apply -a upgrade neutron
2026-04-17 06:34:33.377091 | orchestrator | 2026-04-17 06:34:33 | INFO  | Prepare task for execution of neutron.
2026-04-17 06:34:33.446401 | orchestrator | 2026-04-17 06:34:33 | INFO  | Task 9ae96d39-528c-4a34-8897-6e7dbfdf3207 (neutron) was prepared for execution.
2026-04-17 06:34:33.446522 | orchestrator | 2026-04-17 06:34:33 | INFO  | It takes a moment until task 9ae96d39-528c-4a34-8897-6e7dbfdf3207 (neutron) has been started and output is visible here.
2026-04-17 06:35:14.173197 | orchestrator |
2026-04-17 06:35:14.173366 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 06:35:14.173382 | orchestrator |
2026-04-17 06:35:14.173393 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 06:35:14.173403 | orchestrator | Friday 17 April 2026 06:34:38 +0000 (0:00:01.915) 0:00:01.915 **********
2026-04-17 06:35:14.173413 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:35:14.173446 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:35:14.173456 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:35:14.173466 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:35:14.173475 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:35:14.173485 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:35:14.173494 | orchestrator |
2026-04-17 06:35:14.173504 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 06:35:14.173513 | orchestrator | Friday 17 April 2026 06:34:41 +0000 (0:00:02.497) 0:00:04.412 **********
2026-04-17 06:35:14.173523 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-17 06:35:14.173533 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-17 06:35:14.173543 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-17 06:35:14.173553 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-17 06:35:14.173575 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-17 06:35:14.173586 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-17 06:35:14.173595 | orchestrator |
2026-04-17 06:35:14.173605 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-17 06:35:14.173615 | orchestrator |
2026-04-17 06:35:14.173624 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-17 06:35:14.173634 | orchestrator | Friday 17 April 2026 06:34:45 +0000 (0:00:04.068) 0:00:08.480 **********
2026-04-17 06:35:14.173644 | orchestrator | included: /ansible/roles/neutron/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 06:35:14.173654 | orchestrator |
2026-04-17 06:35:14.173664 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-17 06:35:14.173674 | orchestrator | Friday 17 April 2026 06:34:49 +0000 (0:00:04.140) 0:00:12.621 **********
2026-04-17 06:35:14.173684 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:35:14.173693 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:35:14.173703 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:35:14.173712 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:35:14.173722 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:35:14.173731 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:35:14.173741 | orchestrator |
2026-04-17 06:35:14.173750 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-17 06:35:14.173760 | orchestrator | Friday 17 April 2026 06:34:52 +0000 (0:00:03.005) 0:00:15.627 **********
2026-04-17 06:35:14.173769 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:35:14.173779 | orchestrator | ok: [testbed-node-1]
2026-04-17 06:35:14.173789 | orchestrator | ok: [testbed-node-2]
2026-04-17 06:35:14.173799 | orchestrator | ok: [testbed-node-3]
2026-04-17 06:35:14.173808 | orchestrator | ok: [testbed-node-4]
2026-04-17 06:35:14.173818 | orchestrator | ok: [testbed-node-5]
2026-04-17 06:35:14.173827 | orchestrator |
2026-04-17 06:35:14.173837 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-17 06:35:14.173846 | orchestrator | Friday 17 April 2026 06:34:54 +0000 (0:00:02.427) 0:00:18.054 **********
2026-04-17 06:35:14.173856 | orchestrator | ok: [testbed-node-0] => {
2026-04-17 06:35:14.173866 | orchestrator |  "changed": false,
2026-04-17 06:35:14.173876 | orchestrator |  "msg": "All assertions passed"
2026-04-17 06:35:14.173885 | orchestrator | }
2026-04-17 06:35:14.173895 | orchestrator | ok: [testbed-node-1] => {
2026-04-17 06:35:14.173905 | orchestrator |  "changed": false,
2026-04-17 06:35:14.173914 | orchestrator |  "msg": "All assertions passed"
2026-04-17 06:35:14.173924 | orchestrator | }
2026-04-17 06:35:14.173933 | orchestrator | ok: [testbed-node-2] => {
2026-04-17 06:35:14.173942 | orchestrator |  "changed": false,
2026-04-17 06:35:14.173952 | orchestrator |  "msg": "All assertions passed"
2026-04-17 06:35:14.173961 | orchestrator | }
2026-04-17 06:35:14.173971 | orchestrator | ok: [testbed-node-3] => {
2026-04-17 06:35:14.173980 | orchestrator |  "changed": false,
2026-04-17 06:35:14.173990 | orchestrator |  "msg": "All assertions passed"
2026-04-17 06:35:14.174006 | orchestrator | }
2026-04-17 06:35:14.174073 | orchestrator | ok: [testbed-node-4] => {
2026-04-17 06:35:14.174084 | orchestrator |  "changed": false,
2026-04-17 06:35:14.174093 | orchestrator |  "msg": "All assertions passed"
2026-04-17 06:35:14.174103 | orchestrator | }
2026-04-17 06:35:14.174112 | orchestrator | ok: [testbed-node-5] => {
2026-04-17 06:35:14.174122 | orchestrator |  "changed": false,
2026-04-17 06:35:14.174132 | orchestrator |  "msg": "All assertions passed"
2026-04-17 06:35:14.174141 | orchestrator | }
2026-04-17 06:35:14.174151 | orchestrator |
2026-04-17 06:35:14.174161 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-17 06:35:14.174171 | orchestrator | Friday 17 April 2026 06:34:57 +0000 (0:00:02.132) 0:00:20.187
********** 2026-04-17 06:35:14.174181 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:35:14.174190 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:35:14.174200 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:35:14.174231 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:35:14.174241 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:35:14.174250 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:35:14.174260 | orchestrator | 2026-04-17 06:35:14.174270 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-17 06:35:14.174279 | orchestrator | Friday 17 April 2026 06:34:59 +0000 (0:00:02.281) 0:00:22.469 ********** 2026-04-17 06:35:14.174289 | orchestrator | included: /ansible/roles/neutron/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 06:35:14.174301 | orchestrator | 2026-04-17 06:35:14.174310 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-17 06:35:14.174320 | orchestrator | Friday 17 April 2026 06:35:02 +0000 (0:00:02.720) 0:00:25.189 ********** 2026-04-17 06:35:14.174329 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:35:14.174339 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:35:14.174348 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:35:14.174358 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:35:14.174384 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:35:14.174395 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:35:14.174404 | orchestrator | 2026-04-17 06:35:14.174414 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-17 06:35:14.174424 | orchestrator | Friday 17 April 2026 06:35:05 +0000 (0:00:03.552) 0:00:28.741 ********** 2026-04-17 06:35:14.174433 | orchestrator | ok: [testbed-node-1] 2026-04-17 
06:35:14.174443 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:35:14.174453 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:35:14.174462 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:35:14.174472 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:35:14.174481 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:35:14.174491 | orchestrator | 2026-04-17 06:35:14.174500 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-17 06:35:14.174510 | orchestrator | Friday 17 April 2026 06:35:07 +0000 (0:00:02.060) 0:00:30.802 ********** 2026-04-17 06:35:14.174520 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:35:14.174529 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:35:14.174539 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:35:14.174549 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:35:14.174558 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:35:14.174568 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:35:14.174577 | orchestrator | 2026-04-17 06:35:14.174587 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-17 06:35:14.174602 | orchestrator | Friday 17 April 2026 06:35:11 +0000 (0:00:04.119) 0:00:34.922 ********** 2026-04-17 06:35:14.174618 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:14.174641 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:14.174652 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:14.174672 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 06:35:26.621528 | orchestrator | ok: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 06:35:26.621634 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 06:35:26.621644 | orchestrator | 2026-04-17 06:35:26.621652 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-17 06:35:26.621659 | orchestrator | Friday 17 April 2026 06:35:15 +0000 (0:00:04.119) 0:00:39.042 ********** 2026-04-17 06:35:26.621666 | orchestrator | [WARNING]: Skipped 2026-04-17 06:35:26.621673 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-17 06:35:26.621681 | orchestrator | due to this access issue: 2026-04-17 06:35:26.621688 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-17 06:35:26.621695 | orchestrator | a directory 2026-04-17 06:35:26.621701 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 06:35:26.621707 | orchestrator | 2026-04-17 06:35:26.621714 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-17 06:35:26.621720 | orchestrator | Friday 17 April 2026 06:35:18 +0000 (0:00:02.385) 0:00:41.427 ********** 2026-04-17 
06:35:26.621727 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 06:35:26.621734 | orchestrator | 2026-04-17 06:35:26.621741 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-17 06:35:26.621747 | orchestrator | Friday 17 April 2026 06:35:21 +0000 (0:00:02.918) 0:00:44.345 ********** 2026-04-17 06:35:26.621755 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:26.621781 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:26.621794 | orchestrator | ok: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 06:35:26.621801 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:26.621808 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 06:35:26.621814 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 06:35:26.621821 | orchestrator | 2026-04-17 06:35:26.621827 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-17 06:35:26.621837 | orchestrator | Friday 17 April 2026 06:35:25 +0000 (0:00:03.962) 0:00:48.308 ********** 2026-04-17 06:35:26.621852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:35:31.089687 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:35:31.089795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:35:31.089814 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:35:31.089827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:35:31.089839 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:35:31.089851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:35:31.089888 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:35:31.089915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:35:31.089927 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:35:31.089959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:35:31.089972 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:35:31.089983 | orchestrator | 2026-04-17 06:35:31.089995 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-17 06:35:31.090007 | orchestrator | Friday 17 April 2026 06:35:28 +0000 (0:00:03.718) 0:00:52.027 ********** 2026-04-17 06:35:31.090079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:35:31.090104 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:35:31.090127 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:35:31.090148 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:35:31.090165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:35:31.090177 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:35:31.090198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:35:41.742825 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:35:41.742913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:35:41.742925 | orchestrator | skipping: [testbed-node-5] 
2026-04-17 06:35:41.742932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:35:41.742956 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:35:41.742963 | orchestrator | 2026-04-17 06:35:41.742969 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-17 06:35:41.742976 | orchestrator | Friday 17 April 2026 06:35:32 +0000 (0:00:03.888) 0:00:55.915 ********** 2026-04-17 06:35:41.742982 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:35:41.742988 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:35:41.742993 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:35:41.742999 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:35:41.743004 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:35:41.743010 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:35:41.743016 | orchestrator | 2026-04-17 06:35:41.743022 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-17 06:35:41.743027 | orchestrator | Friday 17 April 2026 06:35:36 +0000 (0:00:03.304) 0:00:59.220 ********** 2026-04-17 06:35:41.743033 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:35:41.743039 | 
orchestrator | 2026-04-17 06:35:41.743044 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-17 06:35:41.743050 | orchestrator | Friday 17 April 2026 06:35:37 +0000 (0:00:01.126) 0:01:00.346 ********** 2026-04-17 06:35:41.743056 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:35:41.743062 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:35:41.743071 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:35:41.743080 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:35:41.743089 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:35:41.743098 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:35:41.743107 | orchestrator | 2026-04-17 06:35:41.743116 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-17 06:35:41.743125 | orchestrator | Friday 17 April 2026 06:35:39 +0000 (0:00:02.045) 0:01:02.392 ********** 2026-04-17 06:35:41.743152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': 
['option httpchk']}}}})  2026-04-17 06:35:41.743164 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:35:41.743190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:35:41.743208 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:35:41.743270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:35:41.743283 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:35:41.743292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:35:41.743301 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:35:41.743315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:35:41.743325 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:35:41.743345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:35:52.566908 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:35:52.567040 | orchestrator | 2026-04-17 06:35:52.567058 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-17 06:35:52.567071 | orchestrator | Friday 17 April 2026 06:35:42 +0000 (0:00:03.618) 0:01:06.011 ********** 2026-04-17 06:35:52.567865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 06:35:52.567916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:52.567946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': 
['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:52.567958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:52.567991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 06:35:52.568017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 06:35:52.568028 | orchestrator | 2026-04-17 06:35:52.568039 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-17 06:35:52.568049 | orchestrator | Friday 17 April 2026 06:35:47 +0000 (0:00:04.513) 0:01:10.525 ********** 2026-04-17 06:35:52.568065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:52.568076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:52.568095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 06:35:56.649931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:35:56.650007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 
06:35:56.650071 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 06:35:56.650079 | orchestrator | 2026-04-17 06:35:56.650086 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-17 06:35:56.650092 | orchestrator | Friday 17 April 2026 06:35:54 +0000 (0:00:07.150) 0:01:17.676 ********** 2026-04-17 06:35:56.650098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:35:56.650121 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:35:56.650147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:35:56.650157 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:35:56.650166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:35:56.650175 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:35:56.650188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:35:56.650197 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:35:56.650206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:35:56.650219 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:35:56.650307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:36:24.740202 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:36:24.740365 | orchestrator | 2026-04-17 06:36:24.740384 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-17 06:36:24.740396 | orchestrator | Friday 17 April 2026 06:35:57 +0000 (0:00:03.222) 0:01:20.898 ********** 2026-04-17 06:36:24.740408 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:36:24.740419 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:36:24.740430 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:36:24.740441 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:36:24.740452 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:36:24.740463 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:36:24.740474 | orchestrator | 2026-04-17 06:36:24.740485 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] 
************************************* 2026-04-17 06:36:24.740496 | orchestrator | Friday 17 April 2026 06:36:01 +0000 (0:00:03.941) 0:01:24.840 ********** 2026-04-17 06:36:24.740509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:36:24.740524 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:36:24.740551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:36:24.740564 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:36:24.740575 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:36:24.740609 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:36:24.740624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:36:24.740657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:36:24.740671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 
06:36:24.740683 | orchestrator | 2026-04-17 06:36:24.740699 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-17 06:36:24.740711 | orchestrator | Friday 17 April 2026 06:36:06 +0000 (0:00:04.864) 0:01:29.704 ********** 2026-04-17 06:36:24.740732 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:36:24.740744 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:36:24.740757 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:36:24.740769 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:36:24.740781 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:36:24.740792 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:36:24.740804 | orchestrator | 2026-04-17 06:36:24.740815 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-17 06:36:24.740827 | orchestrator | Friday 17 April 2026 06:36:10 +0000 (0:00:03.683) 0:01:33.388 ********** 2026-04-17 06:36:24.740840 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:36:24.740852 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:36:24.740865 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:36:24.740876 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:36:24.740888 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:36:24.740900 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:36:24.740912 | orchestrator | 2026-04-17 06:36:24.740923 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-17 06:36:24.740936 | orchestrator | Friday 17 April 2026 06:36:14 +0000 (0:00:03.857) 0:01:37.246 ********** 2026-04-17 06:36:24.740948 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:36:24.740960 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:36:24.740972 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:36:24.740983 | orchestrator | skipping: [testbed-node-3] 2026-04-17 
06:36:24.740996 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:36:24.741008 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:36:24.741020 | orchestrator |
2026-04-17 06:36:24.741032 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-04-17 06:36:24.741045 | orchestrator | Friday 17 April 2026 06:36:17 +0000 (0:00:03.488) 0:01:40.734 **********
2026-04-17 06:36:24.741057 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:36:24.741069 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:36:24.741081 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:36:24.741092 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:36:24.741103 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:36:24.741113 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:36:24.741124 | orchestrator |
2026-04-17 06:36:24.741134 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-04-17 06:36:24.741145 | orchestrator | Friday 17 April 2026 06:36:21 +0000 (0:00:03.568) 0:01:44.302 **********
2026-04-17 06:36:24.741155 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:36:24.741166 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:36:24.741177 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:36:24.741187 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:36:24.741198 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:36:24.741208 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:36:24.741219 | orchestrator |
2026-04-17 06:36:24.741247 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-04-17 06:36:24.741266 | orchestrator | Friday 17 April 2026 06:36:24 +0000 (0:00:03.575) 0:01:47.877 **********
2026-04-17 06:36:33.958635 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-17 06:36:33.958747 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:36:33.958764 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-17 06:36:33.958776 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:36:33.958787 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-17 06:36:33.958798 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:36:33.958809 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-17 06:36:33.958820 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:36:33.958831 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-17 06:36:33.958863 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:36:33.958874 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-17 06:36:33.958885 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:36:33.958895 | orchestrator |
2026-04-17 06:36:33.958908 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-04-17 06:36:33.958919 | orchestrator | Friday 17 April 2026 06:36:28 +0000 (0:00:03.760) 0:01:51.638 **********
2026-04-17 06:36:33.958936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:36:33.958966 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:36:33.958979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:36:33.958991 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:36:33.959004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:36:33.959037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:36:33.959059 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:36:33.959070 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:36:33.959082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:36:33.959094 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:36:33.959110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:36:33.959122 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:36:33.959133 | orchestrator |
2026-04-17 06:36:33.959144 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-04-17 06:36:33.959155 | orchestrator | Friday 17 April 2026 06:36:32 +0000 (0:00:03.895) 0:01:55.534 **********
2026-04-17 06:36:33.959167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:36:33.959179 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:36:33.959202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:37:12.618872 | orchestrator | skipping: [testbed-node-0]
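The loop items echoed in the skipped tasks above are kolla-ansible service definitions, each carrying a Docker-style `healthcheck` block (`interval`, `retries`, `start_period`, `test`, `timeout`). As a rough illustration of how such a dict could map onto Docker healthcheck CLI flags, here is a minimal sketch; `build_healthcheck_args` is a hypothetical helper for illustration, not part of kolla-ansible.

```python
# Hedged sketch: translate a kolla-style healthcheck dict (as seen in the log
# items above) into `docker run` healthcheck flags. Assumption: the 'test'
# entry is always of the form ['CMD-SHELL', '<command>'] and the numeric
# fields are plain seconds given as strings, which matches the log output.

def build_healthcheck_args(healthcheck: dict) -> list:
    """Turn a kolla-style healthcheck dict into docker CLI flags."""
    test = healthcheck["test"]  # e.g. ['CMD-SHELL', 'healthcheck_curl http://...:9696']
    if test[0] != "CMD-SHELL":
        raise ValueError("only CMD-SHELL tests handled in this sketch")
    return [
        "--health-cmd", test[1],
        "--health-interval", f"{healthcheck['interval']}s",
        "--health-retries", str(healthcheck["retries"]),
        "--health-start-period", f"{healthcheck['start_period']}s",
        "--health-timeout", f"{healthcheck['timeout']}s",
    ]

# Example input copied from the neutron_server item in the log:
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
    "timeout": "30",
}
print(build_healthcheck_args(hc))
```

This is only a reading aid for the dicts in the log; the actual container options are rendered by kolla-ansible's own modules.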
2026-04-17 06:37:12.618980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:37:12.618998 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:37:12.619026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:37:12.619039 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:12.619050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:37:12.619060 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:37:12.619071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:37:12.619102 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:37:12.619113 | orchestrator |
2026-04-17 06:37:12.619124 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-04-17 06:37:12.619134 | orchestrator | Friday 17 April 2026 06:36:35 +0000 (0:00:03.429) 0:01:58.964 **********
2026-04-17 06:37:12.619144 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:12.619153 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:12.619163 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:12.619172 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:37:12.619181 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:37:12.619190 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:37:12.619200 | orchestrator |
2026-04-17 06:37:12.619210 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-04-17 06:37:12.619234 | orchestrator | Friday 17 April 2026 06:36:39 +0000 (0:00:03.480) 0:02:02.444 **********
2026-04-17 06:37:12.619244 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:12.619291 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:12.619301 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:12.619311 | orchestrator | changed: [testbed-node-3]
2026-04-17 06:37:12.619320 | orchestrator | changed: [testbed-node-4]
2026-04-17 06:37:12.619329 | orchestrator | changed: [testbed-node-5]
2026-04-17 06:37:12.619339 | orchestrator |
2026-04-17 06:37:12.619349 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-04-17 06:37:12.619358 | orchestrator | Friday 17 April 2026 06:36:45 +0000 (0:00:05.867) 0:02:08.312 **********
2026-04-17 06:37:12.619367 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:12.619377 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:12.619387 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:12.619397 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:37:12.619406 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:37:12.619416 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:37:12.619426 | orchestrator |
2026-04-17 06:37:12.619437 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-04-17 06:37:12.619453 | orchestrator | Friday 17 April 2026 06:36:48 +0000 (0:00:03.377) 0:02:11.690 **********
2026-04-17 06:37:12.619469 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:12.619491 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:12.619516 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:12.619530 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:37:12.619546 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:37:12.619561 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:37:12.619575 | orchestrator |
2026-04-17 06:37:12.619590 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-04-17 06:37:12.619605 | orchestrator | Friday 17 April 2026 06:36:52 +0000 (0:00:03.676) 0:02:15.366 **********
2026-04-17 06:37:12.619619 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:12.619633 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:12.619648 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:12.619663 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:37:12.619686 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:37:12.619702 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:37:12.619718 | orchestrator |
2026-04-17 06:37:12.619733 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-04-17 06:37:12.619750 | orchestrator | Friday 17 April 2026 06:36:55 +0000 (0:00:03.332) 0:02:18.699 **********
2026-04-17 06:37:12.619765 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:12.619795 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:12.619811 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:12.619828 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:37:12.619845 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:37:12.619861 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:37:12.619875 | orchestrator |
2026-04-17 06:37:12.619885 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-17 06:37:12.619895 | orchestrator | Friday 17 April 2026 06:36:59 +0000 (0:00:03.793) 0:02:22.492 **********
2026-04-17 06:37:12.619904 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:12.619914 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:12.619923 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:37:12.619933 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:12.619942 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:37:12.619952 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:37:12.619961 | orchestrator |
2026-04-17 06:37:12.619970 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-17 06:37:12.619980 | orchestrator | Friday 17 April 2026 06:37:02 +0000 (0:00:03.246) 0:02:25.739 **********
2026-04-17 06:37:12.619989 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:12.619999 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:12.620008 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:12.620017 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:37:12.620027 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:37:12.620036 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:37:12.620046 | orchestrator |
2026-04-17 06:37:12.620055 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-17 06:37:12.620065 | orchestrator | Friday 17 April 2026 06:37:06 +0000 (0:00:03.514) 0:02:29.253 **********
2026-04-17 06:37:12.620074 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:12.620084 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:12.620093 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:12.620102 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:37:12.620111 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:37:12.620121 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:37:12.620130 | orchestrator |
2026-04-17 06:37:12.620140 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-04-17 06:37:12.620149 | orchestrator | Friday 17 April 2026 06:37:09 +0000 (0:00:03.595) 0:02:32.848 **********
2026-04-17 06:37:12.620159 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-17 06:37:12.620170 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:12.620179 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-17 06:37:12.620189 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:37:12.620198 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-17 06:37:12.620208 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:12.620217 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-17 06:37:12.620227 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:12.620236 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-17 06:37:12.620246 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:37:12.620284 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-17 06:37:12.620294 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:37:12.620304 | orchestrator |
2026-04-17 06:37:12.620324 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-04-17 06:37:19.671499 | orchestrator | Friday 17 April 2026 06:37:13 +0000 (0:00:03.908) 0:02:36.756 **********
2026-04-17 06:37:19.671614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:37:19.671660 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:19.671691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:37:19.671704 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:19.671716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:37:19.671728 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:19.671741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:37:19.671763 | orchestrator | skipping: [testbed-node-5]
2026-04-17 06:37:19.671796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:37:19.671809 | orchestrator | skipping: [testbed-node-4]
2026-04-17 06:37:19.671827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:37:19.671839 | orchestrator | skipping: [testbed-node-3]
2026-04-17 06:37:19.671851 | orchestrator |
2026-04-17 06:37:19.671863 | orchestrator | TASK [service-check-containers : neutron | Check containers] *******************
2026-04-17 06:37:19.671874 | orchestrator | Friday 17 April 2026 06:37:17 +0000 (0:00:03.763) 0:02:40.520 **********
2026-04-17 06:37:19.671887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:37:19.671899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:37:19.671921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:37:25.054927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:37:25.055041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:37:25.055060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-17 06:37:25.055073 | orchestrator |
2026-04-17 06:37:25.055087 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] ***
2026-04-17 06:37:25.055099 | orchestrator | Friday 17 April 2026 06:37:21 +0000 (0:00:03.746) 0:02:44.266 **********
2026-04-17 06:37:25.055111 | orchestrator | changed: [testbed-node-0] => {
2026-04-17 06:37:25.055123 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 06:37:25.055134 | orchestrator | }
2026-04-17 06:37:25.055145 | orchestrator | changed: [testbed-node-1] => {
2026-04-17 06:37:25.055193 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 06:37:25.055206 | orchestrator | }
2026-04-17 06:37:25.055217 | orchestrator | changed: [testbed-node-2] => {
2026-04-17 06:37:25.055227 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 06:37:25.055238 | orchestrator | }
2026-04-17 06:37:25.055248 | orchestrator | changed: [testbed-node-3] => {
2026-04-17 06:37:25.055259 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 06:37:25.055334 | orchestrator | }
2026-04-17 06:37:25.055347 | orchestrator | changed: [testbed-node-4] => {
2026-04-17 06:37:25.055357 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 06:37:25.055368 | orchestrator | }
2026-04-17 06:37:25.055379 | orchestrator | changed: [testbed-node-5] => {
2026-04-17 06:37:25.055390 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 06:37:25.055402 | orchestrator | }
2026-04-17 06:37:25.055413 | orchestrator |
2026-04-17 06:37:25.055426 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-17 06:37:25.055438 | orchestrator | Friday 17 April 2026 06:37:23 +0000 (0:00:01.917) 0:02:46.183 **********
2026-04-17 06:37:25.055472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:37:25.055487 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:37:25.055509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:37:25.055523 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:37:25.055536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 06:37:25.055559 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:37:25.055572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent
6640'], 'timeout': '30'}}})  2026-04-17 06:37:25.055586 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:37:25.055606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:40:27.931676 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:40:27.931815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 06:40:27.931838 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:40:27.931852 | orchestrator | 2026-04-17 06:40:27.931865 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-04-17 06:40:27.931877 | orchestrator | Friday 17 April 2026 06:37:26 +0000 (0:00:03.877) 0:02:50.061 ********** 2026-04-17 06:40:27.931889 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:40:27.931900 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:40:27.931912 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:40:27.931923 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:40:27.931934 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:40:27.931946 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:40:27.931957 | orchestrator | 2026-04-17 06:40:27.931969 | orchestrator | TASK [neutron : Running Neutron database expand container] ********************* 2026-04-17 06:40:27.931980 | orchestrator | Friday 17 April 2026 06:37:28 +0000 (0:00:01.880) 0:02:51.941 ********** 2026-04-17 06:40:27.931991 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:40:27.932002 | orchestrator | 2026-04-17 06:40:27.932014 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 06:40:27.932047 | orchestrator | Friday 17 April 2026 06:38:02 +0000 (0:00:33.822) 0:03:25.763 ********** 2026-04-17 06:40:27.932059 | orchestrator | 2026-04-17 06:40:27.932070 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 06:40:27.932081 | orchestrator | Friday 17 April 2026 06:38:03 +0000 (0:00:00.444) 0:03:26.208 ********** 2026-04-17 06:40:27.932092 | orchestrator | 2026-04-17 06:40:27.932103 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 06:40:27.932114 | orchestrator | Friday 17 April 2026 06:38:03 +0000 (0:00:00.484) 0:03:26.692 ********** 2026-04-17 06:40:27.932126 | orchestrator | 2026-04-17 06:40:27.932137 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 
06:40:27.932188 | orchestrator | Friday 17 April 2026 06:38:04 +0000 (0:00:00.626) 0:03:27.319 ********** 2026-04-17 06:40:27.932198 | orchestrator | 2026-04-17 06:40:27.932211 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 06:40:27.932223 | orchestrator | Friday 17 April 2026 06:38:04 +0000 (0:00:00.415) 0:03:27.734 ********** 2026-04-17 06:40:27.932236 | orchestrator | 2026-04-17 06:40:27.932247 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 06:40:27.932259 | orchestrator | Friday 17 April 2026 06:38:05 +0000 (0:00:00.415) 0:03:28.149 ********** 2026-04-17 06:40:27.932272 | orchestrator | 2026-04-17 06:40:27.932284 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-04-17 06:40:27.932296 | orchestrator | Friday 17 April 2026 06:38:05 +0000 (0:00:00.822) 0:03:28.972 ********** 2026-04-17 06:40:27.932308 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:40:27.932321 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:40:27.932333 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:40:27.932345 | orchestrator | 2026-04-17 06:40:27.932357 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-04-17 06:40:27.932369 | orchestrator | Friday 17 April 2026 06:38:54 +0000 (0:00:48.219) 0:04:17.192 ********** 2026-04-17 06:40:27.932381 | orchestrator | changed: [testbed-node-4] 2026-04-17 06:40:27.932394 | orchestrator | changed: [testbed-node-3] 2026-04-17 06:40:27.932406 | orchestrator | changed: [testbed-node-5] 2026-04-17 06:40:27.932418 | orchestrator | 2026-04-17 06:40:27.932431 | orchestrator | TASK [neutron : Checking neutron pending contract scripts] ********************* 2026-04-17 06:40:27.932456 | orchestrator | Friday 17 April 2026 06:39:59 +0000 (0:01:05.077) 0:05:22.269 ********** 2026-04-17 06:40:27.932468 | orchestrator | 
skipping: [testbed-node-0] 2026-04-17 06:40:27.932480 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:40:27.932492 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:40:27.932504 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:40:27.932516 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:40:27.932528 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:40:27.932540 | orchestrator | 2026-04-17 06:40:27.932553 | orchestrator | TASK [neutron : Stopping all neutron-server for contract db] ******************* 2026-04-17 06:40:27.932564 | orchestrator | Friday 17 April 2026 06:40:01 +0000 (0:00:02.255) 0:05:24.525 ********** 2026-04-17 06:40:27.932575 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:40:27.932585 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:40:27.932596 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:40:27.932606 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:40:27.932617 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:40:27.932627 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:40:27.932638 | orchestrator | 2026-04-17 06:40:27.932648 | orchestrator | TASK [neutron : Running Neutron database contract container] ******************* 2026-04-17 06:40:27.932659 | orchestrator | Friday 17 April 2026 06:40:06 +0000 (0:00:04.876) 0:05:29.402 ********** 2026-04-17 06:40:27.932669 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:40:27.932680 | orchestrator | 2026-04-17 06:40:27.932691 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 06:40:27.932718 | orchestrator | Friday 17 April 2026 06:40:21 +0000 (0:00:15.528) 0:05:44.930 ********** 2026-04-17 06:40:27.932739 | orchestrator | 2026-04-17 06:40:27.932750 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 06:40:27.932760 | orchestrator | Friday 17 April 2026 06:40:22 +0000 (0:00:00.464) 
0:05:45.394 ********** 2026-04-17 06:40:27.932771 | orchestrator | 2026-04-17 06:40:27.932782 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 06:40:27.932792 | orchestrator | Friday 17 April 2026 06:40:22 +0000 (0:00:00.455) 0:05:45.850 ********** 2026-04-17 06:40:27.932803 | orchestrator | 2026-04-17 06:40:27.932813 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 06:40:27.932824 | orchestrator | Friday 17 April 2026 06:40:23 +0000 (0:00:00.455) 0:05:46.306 ********** 2026-04-17 06:40:27.932834 | orchestrator | 2026-04-17 06:40:27.932845 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 06:40:27.932855 | orchestrator | Friday 17 April 2026 06:40:23 +0000 (0:00:00.449) 0:05:46.756 ********** 2026-04-17 06:40:27.932866 | orchestrator | 2026-04-17 06:40:27.932883 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 06:40:27.932894 | orchestrator | Friday 17 April 2026 06:40:24 +0000 (0:00:00.480) 0:05:47.236 ********** 2026-04-17 06:40:27.932904 | orchestrator | 2026-04-17 06:40:27.932915 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-17 06:40:27.932926 | orchestrator | Friday 17 April 2026 06:40:24 +0000 (0:00:00.782) 0:05:48.018 ********** 2026-04-17 06:40:27.932936 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:40:27.932947 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:40:27.932957 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:40:27.932968 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:40:27.932979 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:40:27.932989 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:40:27.933000 | orchestrator | 2026-04-17 06:40:27.933010 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-17 06:40:27.933022 | orchestrator | testbed-node-0 : ok=21  changed=8  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0 2026-04-17 06:40:27.933034 | orchestrator | testbed-node-1 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2026-04-17 06:40:27.933044 | orchestrator | testbed-node-2 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2026-04-17 06:40:27.933055 | orchestrator | testbed-node-3 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0 2026-04-17 06:40:27.933066 | orchestrator | testbed-node-4 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0 2026-04-17 06:40:27.933076 | orchestrator | testbed-node-5 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0 2026-04-17 06:40:27.933087 | orchestrator | 2026-04-17 06:40:27.933098 | orchestrator | 2026-04-17 06:40:27.933108 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 06:40:27.933119 | orchestrator | Friday 17 April 2026 06:40:27 +0000 (0:00:03.032) 0:05:51.051 ********** 2026-04-17 06:40:27.933130 | orchestrator | =============================================================================== 2026-04-17 06:40:27.933160 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 65.08s 2026-04-17 06:40:27.933171 | orchestrator | neutron : Restart neutron-server container ----------------------------- 48.22s 2026-04-17 06:40:27.933182 | orchestrator | neutron : Running Neutron database expand container -------------------- 33.82s 2026-04-17 06:40:27.933193 | orchestrator | neutron : Running Neutron database contract container ------------------ 15.53s 2026-04-17 06:40:27.933203 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.15s 2026-04-17 06:40:27.933221 | orchestrator | neutron 
: Copying over neutron_ovn_metadata_agent.ini ------------------- 5.87s 2026-04-17 06:40:27.933231 | orchestrator | neutron : Stopping all neutron-server for contract db ------------------- 4.88s 2026-04-17 06:40:27.933242 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.86s 2026-04-17 06:40:27.933252 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.51s 2026-04-17 06:40:27.933263 | orchestrator | neutron : include_tasks ------------------------------------------------- 4.14s 2026-04-17 06:40:27.933274 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.12s 2026-04-17 06:40:27.933284 | orchestrator | Setting sysctl values --------------------------------------------------- 4.12s 2026-04-17 06:40:27.933295 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.07s 2026-04-17 06:40:27.933305 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.96s 2026-04-17 06:40:27.933316 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.94s 2026-04-17 06:40:27.933327 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.91s 2026-04-17 06:40:27.933337 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.90s 2026-04-17 06:40:27.933348 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.89s 2026-04-17 06:40:27.933359 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.88s 2026-04-17 06:40:27.933376 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.86s 2026-04-17 06:40:28.678307 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-17 06:40:28.678415 | orchestrator | + osism apply -a reconfigure nova 2026-04-17 06:40:29.979494 | orchestrator | 
2026-04-17 06:40:29 | INFO  | Prepare task for execution of nova. 2026-04-17 06:40:30.053273 | orchestrator | 2026-04-17 06:40:30 | INFO  | Task 509de3db-dada-4ba2-9ef4-a32d5e3e4488 (nova) was prepared for execution. 2026-04-17 06:40:30.053348 | orchestrator | 2026-04-17 06:40:30 | INFO  | It takes a moment until task 509de3db-dada-4ba2-9ef4-a32d5e3e4488 (nova) has been started and output is visible here. 2026-04-17 06:42:51.568790 | orchestrator | 2026-04-17 06:42:51.568941 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 06:42:51.569000 | orchestrator | 2026-04-17 06:42:51.569025 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-17 06:42:51.569039 | orchestrator | Friday 17 April 2026 06:40:35 +0000 (0:00:01.689) 0:00:01.689 ********** 2026-04-17 06:42:51.569067 | orchestrator | changed: [testbed-manager] 2026-04-17 06:42:51.569079 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:42:51.569090 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:42:51.569101 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:42:51.569112 | orchestrator | changed: [testbed-node-3] 2026-04-17 06:42:51.569122 | orchestrator | changed: [testbed-node-4] 2026-04-17 06:42:51.569133 | orchestrator | changed: [testbed-node-5] 2026-04-17 06:42:51.569144 | orchestrator | 2026-04-17 06:42:51.569155 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 06:42:51.569166 | orchestrator | Friday 17 April 2026 06:40:38 +0000 (0:00:03.539) 0:00:05.229 ********** 2026-04-17 06:42:51.569176 | orchestrator | changed: [testbed-manager] 2026-04-17 06:42:51.569187 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:42:51.569198 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:42:51.569208 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:42:51.569219 | orchestrator | changed: [testbed-node-3] 
2026-04-17 06:42:51.569229 | orchestrator | changed: [testbed-node-4] 2026-04-17 06:42:51.569241 | orchestrator | changed: [testbed-node-5] 2026-04-17 06:42:51.569252 | orchestrator | 2026-04-17 06:42:51.569263 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 06:42:51.569274 | orchestrator | Friday 17 April 2026 06:40:40 +0000 (0:00:02.040) 0:00:07.269 ********** 2026-04-17 06:42:51.569309 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-17 06:42:51.569321 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-17 06:42:51.569334 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-17 06:42:51.569346 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-17 06:42:51.569358 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-17 06:42:51.569370 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-17 06:42:51.569382 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-17 06:42:51.569394 | orchestrator | 2026-04-17 06:42:51.569407 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-17 06:42:51.569420 | orchestrator | 2026-04-17 06:42:51.569431 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-17 06:42:51.569443 | orchestrator | Friday 17 April 2026 06:40:43 +0000 (0:00:02.492) 0:00:09.762 ********** 2026-04-17 06:42:51.569456 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:42:51.569467 | orchestrator | 2026-04-17 06:42:51.569479 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-17 06:42:51.569491 | orchestrator | Friday 17 April 2026 06:40:46 +0000 (0:00:03.358) 0:00:13.121 ********** 2026-04-17 06:42:51.569502 | orchestrator 
| ok: [testbed-node-0] => (item=nova_cell0) 2026-04-17 06:42:51.569513 | orchestrator | ok: [testbed-node-0] => (item=nova_api) 2026-04-17 06:42:51.569524 | orchestrator | 2026-04-17 06:42:51.569534 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-17 06:42:51.569545 | orchestrator | Friday 17 April 2026 06:40:51 +0000 (0:00:05.026) 0:00:18.147 ********** 2026-04-17 06:42:51.569556 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-17 06:42:51.569566 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-17 06:42:51.569577 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:42:51.569588 | orchestrator | 2026-04-17 06:42:51.569604 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-17 06:42:51.569623 | orchestrator | Friday 17 April 2026 06:40:57 +0000 (0:00:05.329) 0:00:23.477 ********** 2026-04-17 06:42:51.569641 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:42:51.569659 | orchestrator | 2026-04-17 06:42:51.569676 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-17 06:42:51.569694 | orchestrator | Friday 17 April 2026 06:40:58 +0000 (0:00:01.630) 0:00:25.108 ********** 2026-04-17 06:42:51.569712 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:42:51.569731 | orchestrator | 2026-04-17 06:42:51.569748 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-17 06:42:51.569767 | orchestrator | Friday 17 April 2026 06:41:00 +0000 (0:00:02.134) 0:00:27.243 ********** 2026-04-17 06:42:51.569785 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:42:51.569805 | orchestrator | 2026-04-17 06:42:51.569817 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-17 06:42:51.569828 | orchestrator | Friday 17 April 2026 06:41:04 +0000 (0:00:03.929) 0:00:31.172 ********** 2026-04-17 
06:42:51.569838 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:42:51.569849 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:42:51.569859 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:42:51.569886 | orchestrator | 2026-04-17 06:42:51.569908 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-17 06:42:51.569919 | orchestrator | Friday 17 April 2026 06:41:06 +0000 (0:00:01.759) 0:00:32.932 ********** 2026-04-17 06:42:51.569929 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:42:51.569940 | orchestrator | 2026-04-17 06:42:51.569951 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-17 06:42:51.569962 | orchestrator | Friday 17 April 2026 06:41:40 +0000 (0:00:34.042) 0:01:06.974 ********** 2026-04-17 06:42:51.569994 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:42:51.570006 | orchestrator | 2026-04-17 06:42:51.570085 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-17 06:42:51.570100 | orchestrator | Friday 17 April 2026 06:41:56 +0000 (0:00:15.549) 0:01:22.524 ********** 2026-04-17 06:42:51.570110 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:42:51.570121 | orchestrator | 2026-04-17 06:42:51.570132 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-17 06:42:51.570142 | orchestrator | Friday 17 April 2026 06:42:11 +0000 (0:00:15.121) 0:01:37.646 ********** 2026-04-17 06:42:51.570153 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:42:51.570164 | orchestrator | 2026-04-17 06:42:51.570197 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-17 06:42:51.570208 | orchestrator | Friday 17 April 2026 06:42:13 +0000 (0:00:02.187) 0:01:39.833 ********** 2026-04-17 06:42:51.570219 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:42:51.570229 | 
orchestrator | 2026-04-17 06:42:51.570240 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-17 06:42:51.570258 | orchestrator | Friday 17 April 2026 06:42:15 +0000 (0:00:01.655) 0:01:41.488 ********** 2026-04-17 06:42:51.570269 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:42:51.570280 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:42:51.570290 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:42:51.570301 | orchestrator | 2026-04-17 06:42:51.570312 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-17 06:42:51.570323 | orchestrator | Friday 17 April 2026 06:42:16 +0000 (0:00:01.350) 0:01:42.839 ********** 2026-04-17 06:42:51.570333 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:42:51.570344 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:42:51.570354 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:42:51.570365 | orchestrator | 2026-04-17 06:42:51.570376 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-17 06:42:51.570386 | orchestrator | 2026-04-17 06:42:51.570397 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-17 06:42:51.570408 | orchestrator | Friday 17 April 2026 06:42:18 +0000 (0:00:01.717) 0:01:44.556 ********** 2026-04-17 06:42:51.570419 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:42:51.570429 | orchestrator | 2026-04-17 06:42:51.570440 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-17 06:42:51.570451 | orchestrator | Friday 17 April 2026 06:42:20 +0000 (0:00:01.848) 0:01:46.404 ********** 2026-04-17 06:42:51.570461 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:42:51.570472 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:42:51.570483 | 
orchestrator | ok: [testbed-node-0] 2026-04-17 06:42:51.570494 | orchestrator | 2026-04-17 06:42:51.570505 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-17 06:42:51.570515 | orchestrator | Friday 17 April 2026 06:42:23 +0000 (0:00:03.057) 0:01:49.462 ********** 2026-04-17 06:42:51.570526 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:42:51.570537 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:42:51.570547 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:42:51.570558 | orchestrator | 2026-04-17 06:42:51.570568 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-17 06:42:51.570579 | orchestrator | Friday 17 April 2026 06:42:26 +0000 (0:00:03.402) 0:01:52.864 ********** 2026-04-17 06:42:51.570590 | orchestrator | skipping: [testbed-node-1] => (item=openstack)  2026-04-17 06:42:51.570601 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:42:51.570611 | orchestrator | skipping: [testbed-node-2] => (item=openstack)  2026-04-17 06:42:51.570622 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:42:51.570632 | orchestrator | ok: [testbed-node-0] => (item=openstack) 2026-04-17 06:42:51.570643 | orchestrator | 2026-04-17 06:42:51.570654 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-17 06:42:51.570664 | orchestrator | Friday 17 April 2026 06:42:31 +0000 (0:00:04.963) 0:01:57.828 ********** 2026-04-17 06:42:51.570682 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-17 06:42:51.570692 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:42:51.570703 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-17 06:42:51.570714 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:42:51.570724 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-17 06:42:51.570735 | orchestrator | ok: [testbed-node-0 -> {{ 
service_rabbitmq_delegate_host }}] 2026-04-17 06:42:51.570746 | orchestrator | 2026-04-17 06:42:51.570756 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-17 06:42:51.570767 | orchestrator | Friday 17 April 2026 06:42:43 +0000 (0:00:12.298) 0:02:10.126 ********** 2026-04-17 06:42:51.570778 | orchestrator | skipping: [testbed-node-0] => (item=openstack)  2026-04-17 06:42:51.570788 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:42:51.570799 | orchestrator | skipping: [testbed-node-1] => (item=openstack)  2026-04-17 06:42:51.570810 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:42:51.570820 | orchestrator | skipping: [testbed-node-2] => (item=openstack)  2026-04-17 06:42:51.570831 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:42:51.570841 | orchestrator | 2026-04-17 06:42:51.570852 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-17 06:42:51.570863 | orchestrator | Friday 17 April 2026 06:42:45 +0000 (0:00:01.809) 0:02:11.936 ********** 2026-04-17 06:42:51.570873 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-17 06:42:51.570884 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:42:51.570894 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-17 06:42:51.570905 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:42:51.570916 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-17 06:42:51.570927 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:42:51.570937 | orchestrator | 2026-04-17 06:42:51.570948 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-17 06:42:51.570958 | orchestrator | Friday 17 April 2026 06:42:47 +0000 (0:00:01.970) 0:02:13.906 ********** 2026-04-17 06:42:51.570969 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:42:51.571035 | orchestrator | ok: [testbed-node-0] 
2026-04-17 06:42:51.571055 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:42:51.571066 | orchestrator | 2026-04-17 06:42:51.571077 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-17 06:42:51.571088 | orchestrator | Friday 17 April 2026 06:42:49 +0000 (0:00:01.796) 0:02:15.703 ********** 2026-04-17 06:42:51.571099 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:42:51.571109 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:42:51.571120 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:42:51.571131 | orchestrator | 2026-04-17 06:42:51.571141 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-17 06:42:51.571152 | orchestrator | Friday 17 April 2026 06:42:51 +0000 (0:00:02.032) 0:02:17.736 ********** 2026-04-17 06:42:51.571170 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:44:20.284756 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:44:20.284875 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:44:20.284892 | orchestrator | 2026-04-17 06:44:20.284905 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-17 06:44:20.285002 | orchestrator | Friday 17 April 2026 06:42:55 +0000 (0:00:03.903) 0:02:21.639 ********** 2026-04-17 06:44:20.285032 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:44:20.285044 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:44:20.285055 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:44:20.285066 | orchestrator | 2026-04-17 06:44:20.285077 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-17 06:44:20.285088 | orchestrator | Friday 17 April 2026 06:43:08 +0000 (0:00:12.826) 0:02:34.466 ********** 2026-04-17 06:44:20.285099 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:44:20.285109 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
06:44:20.285120 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:44:20.285151 | orchestrator | 2026-04-17 06:44:20.285163 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-17 06:44:20.285173 | orchestrator | Friday 17 April 2026 06:43:21 +0000 (0:00:13.450) 0:02:47.917 ********** 2026-04-17 06:44:20.285184 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:44:20.285194 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:44:20.285205 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:44:20.285215 | orchestrator | 2026-04-17 06:44:20.285227 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-17 06:44:20.285237 | orchestrator | Friday 17 April 2026 06:43:23 +0000 (0:00:02.350) 0:02:50.267 ********** 2026-04-17 06:44:20.285248 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:44:20.285259 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:44:20.285270 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:44:20.285280 | orchestrator | 2026-04-17 06:44:20.285291 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-17 06:44:20.285304 | orchestrator | Friday 17 April 2026 06:43:25 +0000 (0:00:01.986) 0:02:52.253 ********** 2026-04-17 06:44:20.285316 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:44:20.285328 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:44:20.285340 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:44:20.285352 | orchestrator | 2026-04-17 06:44:20.285364 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-17 06:44:20.285376 | orchestrator | Friday 17 April 2026 06:43:39 +0000 (0:00:13.588) 0:03:05.842 ********** 2026-04-17 06:44:20.285389 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:44:20.285401 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:44:20.285413 
| orchestrator | skipping: [testbed-node-2] 2026-04-17 06:44:20.285425 | orchestrator | 2026-04-17 06:44:20.285437 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-17 06:44:20.285449 | orchestrator | 2026-04-17 06:44:20.285462 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-17 06:44:20.285474 | orchestrator | Friday 17 April 2026 06:43:41 +0000 (0:00:01.758) 0:03:07.600 ********** 2026-04-17 06:44:20.285487 | orchestrator | included: /ansible/roles/nova/tasks/reconfigure.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:44:20.285501 | orchestrator | 2026-04-17 06:44:20.285513 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] ***************** 2026-04-17 06:44:20.285526 | orchestrator | Friday 17 April 2026 06:43:43 +0000 (0:00:02.054) 0:03:09.655 ********** 2026-04-17 06:44:20.285538 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-17 06:44:20.285550 | orchestrator | ok: [testbed-node-0] => (item=nova (compute)) 2026-04-17 06:44:20.285562 | orchestrator | 2026-04-17 06:44:20.285574 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] **************** 2026-04-17 06:44:20.285587 | orchestrator | Friday 17 April 2026 06:43:47 +0000 (0:00:04.451) 0:03:14.106 ********** 2026-04-17 06:44:20.285599 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-17 06:44:20.285613 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-17 06:44:20.285626 | orchestrator | ok: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-17 06:44:20.285639 | orchestrator | ok: [testbed-node-0] => (item=nova -> 
https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-17 06:44:20.285651 | orchestrator | 2026-04-17 06:44:20.285662 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-17 06:44:20.285673 | orchestrator | Friday 17 April 2026 06:43:55 +0000 (0:00:07.763) 0:03:21.870 ********** 2026-04-17 06:44:20.285684 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 06:44:20.285694 | orchestrator | 2026-04-17 06:44:20.285705 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-04-17 06:44:20.285723 | orchestrator | Friday 17 April 2026 06:43:59 +0000 (0:00:04.377) 0:03:26.247 ********** 2026-04-17 06:44:20.285734 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-17 06:44:20.285745 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 06:44:20.285755 | orchestrator | 2026-04-17 06:44:20.285766 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-17 06:44:20.285777 | orchestrator | Friday 17 April 2026 06:44:05 +0000 (0:00:05.966) 0:03:32.214 ********** 2026-04-17 06:44:20.285787 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 06:44:20.285798 | orchestrator | 2026-04-17 06:44:20.285809 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] *************** 2026-04-17 06:44:20.285820 | orchestrator | Friday 17 April 2026 06:44:10 +0000 (0:00:04.236) 0:03:36.450 ********** 2026-04-17 06:44:20.285830 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-17 06:44:20.285842 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> service) 2026-04-17 06:44:20.285853 | orchestrator | 2026-04-17 06:44:20.285883 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-17 06:44:20.285894 | orchestrator | Friday 17 April 2026 
06:44:18 +0000 (0:00:08.411) 0:03:44.862 ********** 2026-04-17 06:44:20.285949 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:20.285968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:20.285982 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:20.286002 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:20.286090 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:32.585168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:32.585290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:32.585332 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:32.585347 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:32.585359 | orchestrator | 2026-04-17 06:44:32.585372 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-17 06:44:32.585399 | orchestrator | Friday 17 April 2026 06:44:22 +0000 (0:00:03.682) 0:03:48.544 ********** 2026-04-17 06:44:32.585410 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:44:32.585423 | orchestrator | 2026-04-17 06:44:32.585434 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-17 06:44:32.585445 | orchestrator | Friday 17 April 2026 06:44:23 +0000 (0:00:01.152) 0:03:49.697 ********** 2026-04-17 06:44:32.585455 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:44:32.585466 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:44:32.585476 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:44:32.585487 | orchestrator | 2026-04-17 06:44:32.585497 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-17 06:44:32.585508 | orchestrator | Friday 17 April 2026 06:44:24 +0000 (0:00:01.531) 0:03:51.229 ********** 2026-04-17 06:44:32.585518 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 06:44:32.585529 | orchestrator | 2026-04-17 06:44:32.585540 | orchestrator | TASK [nova : Set vendordata file path] 
***************************************** 2026-04-17 06:44:32.585567 | orchestrator | Friday 17 April 2026 06:44:27 +0000 (0:00:02.191) 0:03:53.420 ********** 2026-04-17 06:44:32.585579 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:44:32.585590 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:44:32.585601 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:44:32.585612 | orchestrator | 2026-04-17 06:44:32.585623 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-17 06:44:32.585634 | orchestrator | Friday 17 April 2026 06:44:28 +0000 (0:00:01.374) 0:03:54.795 ********** 2026-04-17 06:44:32.585645 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:44:32.585657 | orchestrator | 2026-04-17 06:44:32.585667 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-17 06:44:32.585678 | orchestrator | Friday 17 April 2026 06:44:30 +0000 (0:00:02.076) 0:03:56.871 ********** 2026-04-17 06:44:32.585690 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:32.585712 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:32.585733 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 
'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:32.585759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:35.723040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:35.723146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:35.723179 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:35.723192 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:35.723203 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:35.723236 | orchestrator | 2026-04-17 06:44:35.723254 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-17 06:44:35.723272 | orchestrator | Friday 17 April 2026 06:44:34 +0000 (0:00:04.363) 0:04:01.235 ********** 2026-04-17 06:44:35.723313 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:35.723333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:35.723351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:44:35.723361 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:44:35.723374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-17 06:44:35.723402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:37.647070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:44:37.647198 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:44:37.647247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:37.647270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:37.647319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:44:37.647338 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:44:37.647355 | orchestrator | 2026-04-17 06:44:37.647373 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-17 06:44:37.647391 | orchestrator | Friday 17 April 2026 06:44:37 +0000 (0:00:02.171) 0:04:03.407 ********** 2026-04-17 06:44:37.647433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:37.647453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:37.647478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:44:37.647496 | orchestrator | skipping: [testbed-node-0] 
2026-04-17 06:44:37.647514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:37.647556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:41.233718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:44:41.233829 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:44:41.233866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:41.233882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:41.233985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:44:41.234001 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:44:41.234013 | orchestrator | 2026-04-17 06:44:41.234097 | orchestrator | TASK [nova : 
Copying over config.json files for services] ********************** 2026-04-17 06:44:41.234117 | orchestrator | Friday 17 April 2026 06:44:38 +0000 (0:00:01.884) 0:04:05.291 ********** 2026-04-17 06:44:41.234162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:41.234197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:41.234229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:41.234244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:41.234270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:50.067202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:50.067336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:50.067354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:50.067367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:50.067378 | orchestrator | 2026-04-17 06:44:50.067391 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-17 06:44:50.067434 | orchestrator | Friday 17 April 2026 06:44:43 +0000 (0:00:04.653) 0:04:09.945 ********** 2026-04-17 06:44:50.067467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:50.067488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:50.067510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:50.067522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:50.067544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:54.790442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:44:54.790590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:54.790607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:54.790618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:44:54.790629 | orchestrator | 2026-04-17 06:44:54.790641 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-17 06:44:54.790652 | orchestrator | Friday 17 April 2026 
06:44:54 +0000 (0:00:10.608) 0:04:20.553 ********** 2026-04-17 06:44:54.790680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:54.790707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:54.790718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:44:54.790729 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:44:54.790740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:54.790752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:44:54.790775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:45:13.563044 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:45:13.563202 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:45:13.563238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:45:13.563261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:45:13.563283 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:45:13.563303 | orchestrator | 2026-04-17 06:45:13.563325 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-17 06:45:13.563345 | orchestrator | Friday 17 April 2026 06:44:56 +0000 (0:00:01.859) 0:04:22.413 ********** 2026-04-17 06:45:13.563363 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:45:13.563382 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:45:13.563401 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:45:13.563419 | orchestrator | 2026-04-17 06:45:13.563463 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-04-17 06:45:13.563483 | orchestrator | Friday 17 April 2026 06:44:57 +0000 (0:00:01.726) 0:04:24.139 ********** 2026-04-17 06:45:13.563498 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:45:13.563509 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:45:13.563520 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:45:13.563531 | orchestrator | 2026-04-17 06:45:13.563544 | orchestrator | TASK [nova : Copying over 
vendordata file for nova services] ******************* 2026-04-17 06:45:13.563557 | orchestrator | Friday 17 April 2026 06:44:59 +0000 (0:00:02.114) 0:04:26.254 ********** 2026-04-17 06:45:13.563570 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-04-17 06:45:13.563582 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-17 06:45:13.563594 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:45:13.563607 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-04-17 06:45:13.563619 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-17 06:45:13.563631 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:45:13.563644 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-04-17 06:45:13.563656 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-17 06:45:13.563682 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:45:13.563695 | orchestrator | 2026-04-17 06:45:13.563725 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-04-17 06:45:13.563738 | orchestrator | Friday 17 April 2026 06:45:01 +0000 (0:00:01.789) 0:04:28.044 ********** 2026-04-17 06:45:13.563751 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-04-17 06:45:13.563765 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-04-17 06:45:13.563778 | orchestrator | 2026-04-17 06:45:13.563790 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-04-17 06:45:13.563803 | orchestrator | Friday 17 April 2026 06:45:04 +0000 (0:00:02.676) 0:04:30.720 ********** 2026-04-17 06:45:13.563815 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:45:13.563828 | 
orchestrator | changed: [testbed-node-1] 2026-04-17 06:45:13.563840 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:45:13.563852 | orchestrator | 2026-04-17 06:45:13.563864 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-04-17 06:45:13.563900 | orchestrator | Friday 17 April 2026 06:45:08 +0000 (0:00:03.740) 0:04:34.461 ********** 2026-04-17 06:45:13.563913 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:45:13.563924 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:45:13.563934 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:45:13.563945 | orchestrator | 2026-04-17 06:45:13.563956 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-04-17 06:45:13.563966 | orchestrator | Friday 17 April 2026 06:45:11 +0000 (0:00:03.480) 0:04:37.942 ********** 2026-04-17 06:45:13.563979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-17 06:45:13.564001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:45:13.564029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:45:18.186614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:45:18.186723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:45:18.186763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:45:18.186791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:45:18.186822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:45:18.186833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:45:18.186844 | orchestrator | 2026-04-17 06:45:18.186855 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-04-17 06:45:18.186867 | orchestrator | Friday 17 April 2026 06:45:16 +0000 (0:00:04.673) 0:04:42.616 ********** 2026-04-17 06:45:18.186934 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 06:45:18.186946 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:45:18.186956 | orchestrator | } 2026-04-17 06:45:18.186966 | 
orchestrator | changed: [testbed-node-1] => { 2026-04-17 06:45:18.186975 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:45:18.186993 | orchestrator | } 2026-04-17 06:45:18.187003 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 06:45:18.187012 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:45:18.187022 | orchestrator | } 2026-04-17 06:45:18.187032 | orchestrator | 2026-04-17 06:45:18.187042 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 06:45:18.187051 | orchestrator | Friday 17 April 2026 06:45:17 +0000 (0:00:01.466) 0:04:44.082 ********** 2026-04-17 06:45:18.187062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:45:18.187074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:45:18.187098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:46:50.433591 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:46:50.433706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:46:50.433748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:46:50.433761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:46:50.433772 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:46:50.433798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:46:50.433927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:46:50.433951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:46:50.433961 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:46:50.433971 | orchestrator | 2026-04-17 06:46:50.433982 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-17 06:46:50.433994 | orchestrator | Friday 17 April 2026 06:45:19 +0000 (0:00:02.155) 0:04:46.238 ********** 2026-04-17 06:46:50.434004 | orchestrator | 2026-04-17 06:46:50.434014 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-17 
06:46:50.434079 | orchestrator | Friday 17 April 2026 06:45:20 +0000 (0:00:00.837) 0:04:47.075 ********** 2026-04-17 06:46:50.434090 | orchestrator | 2026-04-17 06:46:50.434099 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-17 06:46:50.434109 | orchestrator | Friday 17 April 2026 06:45:21 +0000 (0:00:00.537) 0:04:47.612 ********** 2026-04-17 06:46:50.434118 | orchestrator | 2026-04-17 06:46:50.434129 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-17 06:46:50.434140 | orchestrator | Friday 17 April 2026 06:45:22 +0000 (0:00:00.935) 0:04:48.548 ********** 2026-04-17 06:46:50.434151 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:46:50.434162 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:46:50.434173 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:46:50.434184 | orchestrator | 2026-04-17 06:46:50.434195 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-17 06:46:50.434206 | orchestrator | Friday 17 April 2026 06:45:49 +0000 (0:00:27.200) 0:05:15.749 ********** 2026-04-17 06:46:50.434217 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:46:50.434227 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:46:50.434238 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:46:50.434249 | orchestrator | 2026-04-17 06:46:50.434260 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] *********************** 2026-04-17 06:46:50.434271 | orchestrator | Friday 17 April 2026 06:46:03 +0000 (0:00:14.048) 0:05:29.798 ********** 2026-04-17 06:46:50.434281 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:46:50.434292 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:46:50.434303 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:46:50.434314 | orchestrator | 2026-04-17 06:46:50.434325 | orchestrator | PLAY [Apply role nova-cell] 
**************************************************** 2026-04-17 06:46:50.434336 | orchestrator | 2026-04-17 06:46:50.434347 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 06:46:50.434358 | orchestrator | Friday 17 April 2026 06:46:09 +0000 (0:00:06.323) 0:05:36.121 ********** 2026-04-17 06:46:50.434369 | orchestrator | included: /ansible/roles/nova-cell/tasks/reconfigure.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:46:50.434381 | orchestrator | 2026-04-17 06:46:50.434391 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 06:46:50.434402 | orchestrator | Friday 17 April 2026 06:46:12 +0000 (0:00:02.502) 0:05:38.623 ********** 2026-04-17 06:46:50.434413 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:46:50.434424 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:46:50.434448 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:46:50.434459 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:46:50.434469 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:46:50.434478 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:46:50.434488 | orchestrator | 2026-04-17 06:46:50.434497 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-04-17 06:46:50.434507 | orchestrator | Friday 17 April 2026 06:46:14 +0000 (0:00:02.169) 0:05:40.793 ********** 2026-04-17 06:46:50.434516 | orchestrator | changed: [testbed-node-3] 2026-04-17 06:46:50.434526 | orchestrator | 2026-04-17 06:46:50.434535 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-04-17 06:46:50.434545 | orchestrator | Friday 17 April 2026 06:46:48 +0000 (0:00:34.438) 0:06:15.232 ********** 2026-04-17 06:46:50.434554 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:46:50.434565 | orchestrator 
| 2026-04-17 06:46:50.434581 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-04-17 06:47:45.871820 | orchestrator | Friday 17 April 2026 06:46:51 +0000 (0:00:02.552) 0:06:17.784 ********** 2026-04-17 06:47:45.871941 | orchestrator | included: service-image-info for testbed-node-3 2026-04-17 06:47:45.871958 | orchestrator | 2026-04-17 06:47:45.871971 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-04-17 06:47:45.871983 | orchestrator | Friday 17 April 2026 06:46:53 +0000 (0:00:02.138) 0:06:19.923 ********** 2026-04-17 06:47:45.871994 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:47:45.872005 | orchestrator | 2026-04-17 06:47:45.872017 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-17 06:47:45.872028 | orchestrator | Friday 17 April 2026 06:46:58 +0000 (0:00:04.549) 0:06:24.473 ********** 2026-04-17 06:47:45.872039 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:47:45.872049 | orchestrator | 2026-04-17 06:47:45.872060 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-04-17 06:47:45.872087 | orchestrator | Friday 17 April 2026 06:47:01 +0000 (0:00:03.219) 0:06:27.692 ********** 2026-04-17 06:47:45.872099 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:47:45.872121 | orchestrator | 2026-04-17 06:47:45.872132 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-17 06:47:45.872143 | orchestrator | Friday 17 April 2026 06:47:04 +0000 (0:00:03.009) 0:06:30.702 ********** 2026-04-17 06:47:45.872154 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:47:45.872165 | orchestrator | 2026-04-17 06:47:45.872176 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-04-17 06:47:45.872186 | orchestrator | Friday 17 April 2026 06:47:07 
+0000 (0:00:03.421) 0:06:34.123 ********** 2026-04-17 06:47:45.872197 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:47:45.872208 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:47:45.872219 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:47:45.872230 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:47:45.872240 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:47:45.872251 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:47:45.872262 | orchestrator | 2026-04-17 06:47:45.872273 | orchestrator | TASK [nova-cell : Get current Libvirt version] ********************************* 2026-04-17 06:47:45.872283 | orchestrator | Friday 17 April 2026 06:47:12 +0000 (0:00:04.816) 0:06:38.939 ********** 2026-04-17 06:47:45.872294 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:47:45.872305 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:47:45.872316 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:47:45.872326 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:47:45.872338 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:47:45.872349 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:47:45.872360 | orchestrator | 2026-04-17 06:47:45.872371 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-04-17 06:47:45.872382 | orchestrator | Friday 17 April 2026 06:47:20 +0000 (0:00:07.620) 0:06:46.560 ********** 2026-04-17 06:47:45.872392 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:47:45.872403 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:47:45.872439 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:47:45.872450 | orchestrator | ok: [testbed-node-4] => { 2026-04-17 06:47:45.872461 | orchestrator |  "changed": false, 2026-04-17 06:47:45.872472 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n" 2026-04-17 06:47:45.872484 | orchestrator | } 2026-04-17 06:47:45.872495 | orchestrator | ok: 
[testbed-node-3] => { 2026-04-17 06:47:45.872506 | orchestrator |  "changed": false, 2026-04-17 06:47:45.872517 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n" 2026-04-17 06:47:45.872527 | orchestrator | } 2026-04-17 06:47:45.872538 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 06:47:45.872548 | orchestrator |  "changed": false, 2026-04-17 06:47:45.872559 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n" 2026-04-17 06:47:45.872570 | orchestrator | } 2026-04-17 06:47:45.872580 | orchestrator | 2026-04-17 06:47:45.872591 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-17 06:47:45.872602 | orchestrator | Friday 17 April 2026 06:47:27 +0000 (0:00:07.665) 0:06:54.226 ********** 2026-04-17 06:47:45.872612 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:47:45.872623 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:47:45.872633 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:47:45.872644 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 06:47:45.872655 | orchestrator | 2026-04-17 06:47:45.872665 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-17 06:47:45.872676 | orchestrator | Friday 17 April 2026 06:47:30 +0000 (0:00:02.446) 0:06:56.672 ********** 2026-04-17 06:47:45.872687 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-17 06:47:45.872698 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-17 06:47:45.872709 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-17 06:47:45.872719 | orchestrator | 2026-04-17 06:47:45.872730 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-17 06:47:45.872741 | orchestrator | Friday 17 April 2026 06:47:32 +0000 (0:00:01.761) 0:06:58.434 
********** 2026-04-17 06:47:45.872751 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-17 06:47:45.872762 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-17 06:47:45.872807 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-17 06:47:45.872818 | orchestrator | 2026-04-17 06:47:45.872829 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-17 06:47:45.872840 | orchestrator | Friday 17 April 2026 06:47:34 +0000 (0:00:02.508) 0:07:00.943 ********** 2026-04-17 06:47:45.872850 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-17 06:47:45.872861 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:47:45.872872 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-17 06:47:45.872882 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:47:45.872893 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-17 06:47:45.872904 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:47:45.872915 | orchestrator | 2026-04-17 06:47:45.872925 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-17 06:47:45.872954 | orchestrator | Friday 17 April 2026 06:47:35 +0000 (0:00:01.403) 0:07:02.346 ********** 2026-04-17 06:47:45.872965 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 06:47:45.872976 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 06:47:45.872987 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-17 06:47:45.872998 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-17 06:47:45.873008 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-17 06:47:45.873019 | orchestrator | ok: [testbed-node-4] => 
(item=net.bridge.bridge-nf-call-ip6tables) 2026-04-17 06:47:45.873037 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-17 06:47:45.873048 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:47:45.873059 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 06:47:45.873069 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 06:47:45.873080 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-17 06:47:45.873090 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:47:45.873101 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 06:47:45.873112 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 06:47:45.873122 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:47:45.873133 | orchestrator | 2026-04-17 06:47:45.873157 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-17 06:47:45.873168 | orchestrator | Friday 17 April 2026 06:47:38 +0000 (0:00:02.525) 0:07:04.872 ********** 2026-04-17 06:47:45.873178 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:47:45.873189 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:47:45.873199 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:47:45.873210 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:47:45.873221 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:47:45.873231 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:47:45.873242 | orchestrator | 2026-04-17 06:47:45.873253 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-17 06:47:45.873263 | orchestrator | Friday 17 April 2026 06:47:40 +0000 (0:00:02.207) 0:07:07.080 ********** 2026-04-17 06:47:45.873274 | orchestrator | skipping: [testbed-node-0] 
2026-04-17 06:47:45.873285 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:47:45.873295 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:47:45.873306 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:47:45.873317 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:47:45.873327 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:47:45.873338 | orchestrator | 2026-04-17 06:47:45.873349 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-17 06:47:45.873360 | orchestrator | Friday 17 April 2026 06:47:44 +0000 (0:00:03.417) 0:07:10.498 ********** 2026-04-17 06:47:45.873373 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:47:45.873394 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:47:45.873421 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:47:46.923027 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:47:46.923154 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:47:46.923173 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:47:46.923185 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}}) 2026-04-17 06:47:46.923216 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:46.923274 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:46.923288 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:46.923300 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:47:46.923311 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:46.923323 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:47:46.923339 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:46.923365 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:53.922670 | orchestrator | 2026-04-17 06:47:53.922849 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 06:47:53.922870 | 
orchestrator | Friday 17 April 2026 06:47:48 +0000 (0:00:03.915) 0:07:14.413 ********** 2026-04-17 06:47:53.922883 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:47:53.922895 | orchestrator | 2026-04-17 06:47:53.922907 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-17 06:47:53.922918 | orchestrator | Friday 17 April 2026 06:47:50 +0000 (0:00:02.463) 0:07:16.877 ********** 2026-04-17 06:47:53.922931 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:47:53.922946 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:47:53.922978 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:47:53.923029 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:47:53.923076 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:47:53.923098 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:47:53.923122 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:47:53.923136 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:47:53.923148 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:47:53.923175 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:53.923196 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:57.232301 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:57.232410 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:57.232426 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:57.232439 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:47:57.232476 | orchestrator | 2026-04-17 06:47:57.232490 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-17 06:47:57.232502 | orchestrator | Friday 17 April 2026 06:47:55 +0000 
(0:00:04.956) 0:07:21.834 ********** 2026-04-17 06:47:57.232529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:47:57.232560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:47:57.232573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:47:57.232585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:47:57.232597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:47:57.232621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:47:57.232632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:47:57.232644 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:47:57.232663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:48:00.363336 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:48:00.363454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:48:00.363475 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:48:00.363489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:48:00.363525 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:48:00.363553 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:48:00.363566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:48:00.363579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:48:00.363591 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:48:00.363623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:48:00.363636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:48:00.363655 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:48:00.363667 | orchestrator | 2026-04-17 06:48:00.363679 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-17 06:48:00.363691 | orchestrator | Friday 17 April 2026 06:47:59 +0000 (0:00:03.775) 0:07:25.609 ********** 2026-04-17 06:48:00.363704 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:48:00.363723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:48:00.363735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:48:00.363790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:48:03.858481 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:48:03.858570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:48:03.858604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:48:03.858613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:48:03.858632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:48:03.858640 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:48:03.858647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:48:03.858653 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:48:03.858673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:48:03.858685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:48:03.858692 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:48:03.858698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:48:03.858709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:48:03.858716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:48:03.858722 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:48:03.858728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:48:03.858735 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:48:03.858799 | orchestrator | 2026-04-17 06:48:03.858814 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 06:48:03.858826 | orchestrator | Friday 17 April 2026 06:48:02 +0000 (0:00:03.623) 0:07:29.233 ********** 2026-04-17 06:48:03.858836 | orchestrator | 
skipping: [testbed-node-0] 2026-04-17 06:48:03.858847 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:48:03.858857 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:48:03.858877 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 06:48:03.858887 | orchestrator | 2026-04-17 06:48:03.858905 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-17 06:48:54.679902 | orchestrator | Friday 17 April 2026 06:48:05 +0000 (0:00:02.178) 0:07:31.412 ********** 2026-04-17 06:48:54.680000 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 06:48:54.680012 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 06:48:54.680046 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 06:48:54.680054 | orchestrator | 2026-04-17 06:48:54.680062 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-17 06:48:54.680070 | orchestrator | Friday 17 April 2026 06:48:07 +0000 (0:00:02.370) 0:07:33.782 ********** 2026-04-17 06:48:54.680077 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 06:48:54.680084 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 06:48:54.680092 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 06:48:54.680100 | orchestrator | 2026-04-17 06:48:54.680107 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-17 06:48:54.680114 | orchestrator | Friday 17 April 2026 06:48:09 +0000 (0:00:02.077) 0:07:35.860 ********** 2026-04-17 06:48:54.680121 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:48:54.680129 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:48:54.680137 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:48:54.680144 | orchestrator | 2026-04-17 06:48:54.680152 | orchestrator | TASK [nova-cell : Extract cinder key from file] 
******************************** 2026-04-17 06:48:54.680159 | orchestrator | Friday 17 April 2026 06:48:11 +0000 (0:00:01.709) 0:07:37.569 ********** 2026-04-17 06:48:54.680166 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:48:54.680174 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:48:54.680180 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:48:54.680188 | orchestrator | 2026-04-17 06:48:54.680195 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-17 06:48:54.680202 | orchestrator | Friday 17 April 2026 06:48:13 +0000 (0:00:01.930) 0:07:39.500 ********** 2026-04-17 06:48:54.680210 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-17 06:48:54.680217 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-17 06:48:54.680224 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-17 06:48:54.680232 | orchestrator | 2026-04-17 06:48:54.680239 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-17 06:48:54.680246 | orchestrator | Friday 17 April 2026 06:48:15 +0000 (0:00:02.495) 0:07:41.995 ********** 2026-04-17 06:48:54.680252 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-17 06:48:54.680260 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-17 06:48:54.680267 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-17 06:48:54.680275 | orchestrator | 2026-04-17 06:48:54.680282 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-17 06:48:54.680289 | orchestrator | Friday 17 April 2026 06:48:17 +0000 (0:00:02.234) 0:07:44.230 ********** 2026-04-17 06:48:54.680296 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-17 06:48:54.680303 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-17 06:48:54.680311 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 
2026-04-17 06:48:54.680318 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt) 2026-04-17 06:48:54.680326 | orchestrator | ok: [testbed-node-4] => (item=nova-libvirt) 2026-04-17 06:48:54.680333 | orchestrator | ok: [testbed-node-5] => (item=nova-libvirt) 2026-04-17 06:48:54.680340 | orchestrator | 2026-04-17 06:48:54.680363 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-17 06:48:54.680370 | orchestrator | Friday 17 April 2026 06:48:23 +0000 (0:00:05.185) 0:07:49.415 ********** 2026-04-17 06:48:54.680377 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:48:54.680403 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:48:54.680410 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:48:54.680418 | orchestrator | 2026-04-17 06:48:54.680425 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-17 06:48:54.680432 | orchestrator | Friday 17 April 2026 06:48:24 +0000 (0:00:01.568) 0:07:50.983 ********** 2026-04-17 06:48:54.680439 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:48:54.680446 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:48:54.680454 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:48:54.680462 | orchestrator | 2026-04-17 06:48:54.680471 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-17 06:48:54.680479 | orchestrator | Friday 17 April 2026 06:48:26 +0000 (0:00:01.409) 0:07:52.393 ********** 2026-04-17 06:48:54.680486 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:48:54.680494 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:48:54.680502 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:48:54.680509 | orchestrator | 2026-04-17 06:48:54.680517 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-17 06:48:54.680525 | orchestrator | Friday 17 April 2026 06:48:28 +0000 
(0:00:02.653) 0:07:55.046 ********** 2026-04-17 06:48:54.680533 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-17 06:48:54.680542 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-17 06:48:54.680549 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-17 06:48:54.680557 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-17 06:48:54.680579 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-17 06:48:54.680586 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-17 06:48:54.680594 | orchestrator | 2026-04-17 06:48:54.680602 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-17 06:48:54.680609 | orchestrator | Friday 17 April 2026 06:48:33 +0000 (0:00:04.875) 0:07:59.922 ********** 2026-04-17 06:48:54.680616 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-17 06:48:54.680623 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-17 06:48:54.680631 | orchestrator | ok: [testbed-node-5] => (item=None) 
2026-04-17 06:48:54.680638 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-17 06:48:54.680645 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:48:54.680652 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-17 06:48:54.680660 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:48:54.680667 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-17 06:48:54.680675 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:48:54.680712 | orchestrator | 2026-04-17 06:48:54.680719 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-17 06:48:54.680726 | orchestrator | Friday 17 April 2026 06:48:38 +0000 (0:00:04.596) 0:08:04.518 ********** 2026-04-17 06:48:54.680733 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:48:54.680739 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:48:54.680747 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:48:54.680754 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-04-17 06:48:54.680765 | orchestrator | 2026-04-17 06:48:54.680772 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-17 06:48:54.680778 | orchestrator | Friday 17 April 2026 06:48:41 +0000 (0:00:03.665) 0:08:08.183 ********** 2026-04-17 06:48:54.680785 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 06:48:54.680791 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 06:48:54.680798 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 06:48:54.680804 | orchestrator | 2026-04-17 06:48:54.680811 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-17 06:48:54.680817 | orchestrator | Friday 17 April 2026 06:48:44 +0000 (0:00:02.209) 0:08:10.393 ********** 2026-04-17 06:48:54.680824 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:48:54.680830 | 
orchestrator | skipping: [testbed-node-4] 2026-04-17 06:48:54.680837 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:48:54.680844 | orchestrator | 2026-04-17 06:48:54.680850 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-17 06:48:54.680856 | orchestrator | Friday 17 April 2026 06:48:45 +0000 (0:00:01.516) 0:08:11.910 ********** 2026-04-17 06:48:54.680863 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:48:54.680870 | orchestrator | 2026-04-17 06:48:54.680877 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-17 06:48:54.680884 | orchestrator | Friday 17 April 2026 06:48:46 +0000 (0:00:01.129) 0:08:13.040 ********** 2026-04-17 06:48:54.680890 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:48:54.680901 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:48:54.680908 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:48:54.680915 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:48:54.680921 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:48:54.680928 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:48:54.680936 | orchestrator | 2026-04-17 06:48:54.680942 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-17 06:48:54.680949 | orchestrator | Friday 17 April 2026 06:48:48 +0000 (0:00:02.219) 0:08:15.259 ********** 2026-04-17 06:48:54.680955 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 06:48:54.680962 | orchestrator | 2026-04-17 06:48:54.680968 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-17 06:48:54.680974 | orchestrator | Friday 17 April 2026 06:48:50 +0000 (0:00:01.789) 0:08:17.049 ********** 2026-04-17 06:48:54.680981 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:48:54.680988 | orchestrator | skipping: [testbed-node-4] 2026-04-17 
06:48:54.680994 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:48:54.681001 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:48:54.681007 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:48:54.681014 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:48:54.681020 | orchestrator | 2026-04-17 06:48:54.681027 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-17 06:48:54.681033 | orchestrator | Friday 17 April 2026 06:48:52 +0000 (0:00:01.841) 0:08:18.891 ********** 2026-04-17 06:48:54.681043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:48:54.681062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661053 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661227 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661334 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:48:56.661382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:49:03.159218 | orchestrator | 2026-04-17 06:49:03.159329 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-17 06:49:03.159346 | orchestrator | Friday 17 
April 2026 06:48:57 +0000 (0:00:05.257) 0:08:24.148 ********** 2026-04-17 06:49:03.159377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:49:03.159393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:49:03.159406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:49:03.159438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:49:03.159450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:49:03.159479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:49:03.159497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:49:03.159511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:49:03.159531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:49:03.159544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:49:03.159563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:49:27.201793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:49:27.201941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 
06:49:27.201972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:49:27.202092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:49:27.202118 | orchestrator | 2026-04-17 06:49:27.202142 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-17 06:49:27.202162 | orchestrator | Friday 17 April 2026 06:49:06 +0000 (0:00:08.873) 0:08:33.023 ********** 2026-04-17 06:49:27.202183 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:49:27.202207 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:49:27.202229 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:49:27.202249 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:49:27.202274 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:49:27.202297 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 06:49:27.202318 | orchestrator | 2026-04-17 06:49:27.202342 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-17 06:49:27.202364 | orchestrator | Friday 17 April 2026 06:49:09 +0000 (0:00:03.078) 0:08:36.101 ********** 2026-04-17 06:49:27.202387 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-17 06:49:27.202410 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-17 06:49:27.202432 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-17 06:49:27.202452 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-17 06:49:27.202470 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-17 06:49:27.202488 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-17 06:49:27.202508 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:49:27.202529 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-17 06:49:27.202549 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-17 06:49:27.202569 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:49:27.202587 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-17 06:49:27.202606 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:49:27.202653 | orchestrator | ok: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-17 06:49:27.202699 | orchestrator | ok: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-17 06:49:27.202711 | orchestrator | ok: [testbed-node-4] => (item={'src': 
'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-17 06:49:27.202721 | orchestrator | 2026-04-17 06:49:27.202732 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-17 06:49:27.202743 | orchestrator | Friday 17 April 2026 06:49:15 +0000 (0:00:05.502) 0:08:41.604 ********** 2026-04-17 06:49:27.202753 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:49:27.202764 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:49:27.202774 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:49:27.202802 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:49:27.202822 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:49:27.202833 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:49:27.202844 | orchestrator | 2026-04-17 06:49:27.202855 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-17 06:49:27.202866 | orchestrator | Friday 17 April 2026 06:49:17 +0000 (0:00:01.877) 0:08:43.481 ********** 2026-04-17 06:49:27.202877 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-17 06:49:27.202888 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-17 06:49:27.202898 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-17 06:49:27.202909 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-17 06:49:27.202920 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-17 06:49:27.202931 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-17 06:49:27.202942 | 
orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-17 06:49:27.202953 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-17 06:49:27.202963 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-17 06:49:27.202974 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:49:27.202984 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-17 06:49:27.202995 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:49:27.203006 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-17 06:49:27.203016 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-17 06:49:27.203027 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-17 06:49:27.203037 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-17 06:49:27.203048 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:49:27.203058 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-17 06:49:27.203069 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-17 06:49:27.203079 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-17 06:49:27.203090 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-17 
06:49:27.203100 | orchestrator | 2026-04-17 06:49:27.203111 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-17 06:49:27.203122 | orchestrator | Friday 17 April 2026 06:49:23 +0000 (0:00:06.759) 0:08:50.240 ********** 2026-04-17 06:49:27.203132 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 06:49:27.203143 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 06:49:27.203154 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 06:49:27.203165 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 06:49:27.203175 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 06:49:27.203193 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-17 06:49:27.203203 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 06:49:27.203214 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-17 06:49:27.203225 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-17 06:49:27.203235 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 06:49:27.203255 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 06:49:44.096753 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 06:49:44.096851 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 06:49:44.096864 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 
'ssh_config'})  2026-04-17 06:49:44.096873 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:49:44.096882 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 06:49:44.096903 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-17 06:49:44.096911 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:49:44.096919 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-17 06:49:44.096926 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:49:44.096934 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 06:49:44.096941 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 06:49:44.096949 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 06:49:44.096956 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 06:49:44.096964 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 06:49:44.096971 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 06:49:44.096978 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 06:49:44.096985 | orchestrator | 2026-04-17 06:49:44.096993 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-17 06:49:44.097000 | orchestrator | Friday 17 April 2026 06:49:32 +0000 (0:00:08.251) 0:08:58.492 ********** 2026-04-17 06:49:44.097008 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:49:44.097015 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:49:44.097022 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:49:44.097030 | orchestrator | skipping: 
[testbed-node-0] 2026-04-17 06:49:44.097037 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:49:44.097044 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:49:44.097051 | orchestrator | 2026-04-17 06:49:44.097059 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-17 06:49:44.097066 | orchestrator | Friday 17 April 2026 06:49:34 +0000 (0:00:02.066) 0:09:00.558 ********** 2026-04-17 06:49:44.097073 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:49:44.097080 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:49:44.097088 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:49:44.097095 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:49:44.097102 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:49:44.097109 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:49:44.097117 | orchestrator | 2026-04-17 06:49:44.097124 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-17 06:49:44.097131 | orchestrator | Friday 17 April 2026 06:49:36 +0000 (0:00:01.843) 0:09:02.402 ********** 2026-04-17 06:49:44.097139 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:49:44.097162 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:49:44.097169 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:49:44.097177 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:49:44.097185 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:49:44.097192 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:49:44.097199 | orchestrator | 2026-04-17 06:49:44.097206 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-04-17 06:49:44.097214 | orchestrator | Friday 17 April 2026 06:49:39 +0000 (0:00:03.328) 0:09:05.731 ********** 2026-04-17 06:49:44.097221 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:49:44.097228 | orchestrator | changed: [testbed-node-4] 
2026-04-17 06:49:44.097235 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:49:44.097242 | orchestrator | changed: [testbed-node-3] 2026-04-17 06:49:44.097249 | orchestrator | changed: [testbed-node-5] 2026-04-17 06:49:44.097256 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:49:44.097264 | orchestrator | 2026-04-17 06:49:44.097271 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-17 06:49:44.097278 | orchestrator | Friday 17 April 2026 06:49:42 +0000 (0:00:03.252) 0:09:08.983 ********** 2026-04-17 06:49:44.097289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:49:44.097314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:49:44.097328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:49:44.097337 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:49:44.097345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:49:44.097358 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:49:44.097366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:49:44.097374 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:49:44.097387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:49:49.757481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:49:49.757589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:49:49.757629 | orchestrator | skipping: [testbed-node-5] 
2026-04-17 06:49:49.757705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:49:49.757721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:49:49.757733 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:49:49.757745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:49:49.757756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:49:49.757768 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:49:49.757805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:49:49.757818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:49:49.757838 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:49:49.757850 | orchestrator | 2026-04-17 06:49:49.757862 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-17 06:49:49.757874 | orchestrator | Friday 17 April 2026 06:49:45 +0000 (0:00:03.314) 0:09:12.298 ********** 2026-04-17 06:49:49.757885 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-17 06:49:49.757896 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-17 06:49:49.757907 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:49:49.757918 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-17 06:49:49.757929 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-17 06:49:49.757939 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:49:49.757950 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-17 06:49:49.757961 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-17 06:49:49.757972 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:49:49.757983 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-17 06:49:49.757994 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-17 06:49:49.758007 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:49:49.758078 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-17 06:49:49.758092 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-17 06:49:49.758105 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:49:49.758118 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-17 06:49:49.758130 | orchestrator | skipping: 
[testbed-node-2] => (item=nova-compute-ironic)  2026-04-17 06:49:49.758142 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:49:49.758154 | orchestrator | 2026-04-17 06:49:49.758167 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-04-17 06:49:49.758181 | orchestrator | Friday 17 April 2026 06:49:47 +0000 (0:00:01.973) 0:09:14.272 ********** 2026-04-17 06:49:49.758193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:49:49.758220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:49:51.334260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:49:51.334333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:49:51.334342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:49:51.334348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:49:51.334353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:49:51.334372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:49:51.334404 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:49:51.334411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 
06:49:51.334417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:49:51.334422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:49:51.334427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:49:51.334444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:49:55.795163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:49:55.795239 | orchestrator | 2026-04-17 06:49:55.795246 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-04-17 06:49:55.795252 | orchestrator | Friday 17 April 2026 06:49:52 +0000 (0:00:04.624) 0:09:18.896 ********** 2026-04-17 06:49:55.795257 | orchestrator | changed: 
[testbed-node-3] => { 2026-04-17 06:49:55.795262 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:49:55.795266 | orchestrator | } 2026-04-17 06:49:55.795271 | orchestrator | changed: [testbed-node-4] => { 2026-04-17 06:49:55.795275 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:49:55.795279 | orchestrator | } 2026-04-17 06:49:55.795283 | orchestrator | changed: [testbed-node-5] => { 2026-04-17 06:49:55.795287 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:49:55.795291 | orchestrator | } 2026-04-17 06:49:55.795295 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 06:49:55.795299 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:49:55.795303 | orchestrator | } 2026-04-17 06:49:55.795307 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 06:49:55.795311 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:49:55.795315 | orchestrator | } 2026-04-17 06:49:55.795319 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 06:49:55.795323 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:49:55.795327 | orchestrator | } 2026-04-17 06:49:55.795331 | orchestrator | 2026-04-17 06:49:55.795335 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 06:49:55.795339 | orchestrator | Friday 17 April 2026 06:49:54 +0000 (0:00:02.111) 0:09:21.007 ********** 2026-04-17 06:49:55.795344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:49:55.795351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:49:55.795370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:49:55.795389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:49:55.795394 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:49:55.795399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:49:55.795403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:49:55.795407 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:49:55.795411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:49:55.795419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:49:55.795429 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:52:44.016162 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:52:44.016282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:52:44.016303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:52:44.016316 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:52:44.016328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:52:44.016361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:52:44.016373 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:52:44.016384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:52:44.016409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:52:44.016420 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:52:44.016432 | orchestrator | 2026-04-17 06:52:44.016444 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 06:52:44.016472 | orchestrator | Friday 17 April 2026 06:49:58 +0000 (0:00:03.463) 0:09:24.471 ********** 2026-04-17 06:52:44.016484 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:52:44.016495 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:52:44.016506 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:52:44.016516 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:52:44.016576 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:52:44.016588 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:52:44.016598 | orchestrator | 2026-04-17 06:52:44.016610 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-17 
06:52:44.016622 | orchestrator | Friday 17 April 2026 06:49:59 +0000 (0:00:01.887) 0:09:26.359 ********** 2026-04-17 06:52:44.016633 | orchestrator | 2026-04-17 06:52:44.016644 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-17 06:52:44.016655 | orchestrator | Friday 17 April 2026 06:50:00 +0000 (0:00:00.555) 0:09:26.915 ********** 2026-04-17 06:52:44.016665 | orchestrator | 2026-04-17 06:52:44.016676 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-17 06:52:44.016687 | orchestrator | Friday 17 April 2026 06:50:01 +0000 (0:00:00.817) 0:09:27.732 ********** 2026-04-17 06:52:44.016697 | orchestrator | 2026-04-17 06:52:44.016708 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-17 06:52:44.016719 | orchestrator | Friday 17 April 2026 06:50:01 +0000 (0:00:00.606) 0:09:28.339 ********** 2026-04-17 06:52:44.016729 | orchestrator | 2026-04-17 06:52:44.016740 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-17 06:52:44.016751 | orchestrator | Friday 17 April 2026 06:50:02 +0000 (0:00:00.545) 0:09:28.885 ********** 2026-04-17 06:52:44.016761 | orchestrator | 2026-04-17 06:52:44.016772 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-17 06:52:44.016791 | orchestrator | Friday 17 April 2026 06:50:03 +0000 (0:00:00.565) 0:09:29.450 ********** 2026-04-17 06:52:44.016802 | orchestrator | 2026-04-17 06:52:44.016813 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-17 06:52:44.016823 | orchestrator | Friday 17 April 2026 06:50:03 +0000 (0:00:00.899) 0:09:30.350 ********** 2026-04-17 06:52:44.016834 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:52:44.016845 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:52:44.016856 | 
orchestrator | changed: [testbed-node-2] 2026-04-17 06:52:44.016866 | orchestrator | 2026-04-17 06:52:44.016877 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-17 06:52:44.016888 | orchestrator | Friday 17 April 2026 06:50:19 +0000 (0:00:15.190) 0:09:45.540 ********** 2026-04-17 06:52:44.016898 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:52:44.016909 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:52:44.016920 | orchestrator | changed: [testbed-node-1] 2026-04-17 06:52:44.016931 | orchestrator | 2026-04-17 06:52:44.016942 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-17 06:52:44.016952 | orchestrator | Friday 17 April 2026 06:50:41 +0000 (0:00:22.300) 0:10:07.841 ********** 2026-04-17 06:52:44.016963 | orchestrator | changed: [testbed-node-3] 2026-04-17 06:52:44.016974 | orchestrator | changed: [testbed-node-4] 2026-04-17 06:52:44.016984 | orchestrator | changed: [testbed-node-5] 2026-04-17 06:52:44.016995 | orchestrator | 2026-04-17 06:52:44.017006 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-17 06:52:44.017017 | orchestrator | Friday 17 April 2026 06:51:07 +0000 (0:00:25.927) 0:10:33.768 ********** 2026-04-17 06:52:44.017027 | orchestrator | changed: [testbed-node-3] 2026-04-17 06:52:44.017038 | orchestrator | changed: [testbed-node-4] 2026-04-17 06:52:44.017049 | orchestrator | changed: [testbed-node-5] 2026-04-17 06:52:44.017060 | orchestrator | 2026-04-17 06:52:44.017071 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-17 06:52:44.017081 | orchestrator | Friday 17 April 2026 06:51:51 +0000 (0:00:44.208) 0:11:17.976 ********** 2026-04-17 06:52:44.017092 | orchestrator | changed: [testbed-node-3] 2026-04-17 06:52:44.017103 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is 
ready (10 retries left). 2026-04-17 06:52:44.017115 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-04-17 06:52:44.017126 | orchestrator | changed: [testbed-node-4] 2026-04-17 06:52:44.017136 | orchestrator | changed: [testbed-node-5] 2026-04-17 06:52:44.017147 | orchestrator | 2026-04-17 06:52:44.017158 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-17 06:52:44.017169 | orchestrator | Friday 17 April 2026 06:51:59 +0000 (0:00:07.558) 0:11:25.534 ********** 2026-04-17 06:52:44.017179 | orchestrator | changed: [testbed-node-3] 2026-04-17 06:52:44.017191 | orchestrator | changed: [testbed-node-4] 2026-04-17 06:52:44.017202 | orchestrator | changed: [testbed-node-5] 2026-04-17 06:52:44.017212 | orchestrator | 2026-04-17 06:52:44.017223 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-17 06:52:44.017234 | orchestrator | Friday 17 April 2026 06:52:01 +0000 (0:00:01.864) 0:11:27.399 ********** 2026-04-17 06:52:44.017244 | orchestrator | changed: [testbed-node-4] 2026-04-17 06:52:44.017255 | orchestrator | changed: [testbed-node-3] 2026-04-17 06:52:44.017266 | orchestrator | changed: [testbed-node-5] 2026-04-17 06:52:44.017277 | orchestrator | 2026-04-17 06:52:44.017287 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-17 06:52:44.017299 | orchestrator | Friday 17 April 2026 06:52:32 +0000 (0:00:31.153) 0:11:58.553 ********** 2026-04-17 06:52:44.017310 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:52:44.017321 | orchestrator | 2026-04-17 06:52:44.017337 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-17 06:52:44.017348 | orchestrator | Friday 17 April 2026 06:52:33 +0000 (0:00:01.488) 0:12:00.041 ********** 2026-04-17 06:52:44.017366 | orchestrator | 
skipping: [testbed-node-3] 2026-04-17 06:52:44.017377 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:52:44.017387 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:52:44.017398 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:52:44.017409 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:52:44.017420 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:52:44.017431 | orchestrator | 2026-04-17 06:52:44.017442 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-17 06:52:44.017462 | orchestrator | Friday 17 April 2026 06:52:44 +0000 (0:00:10.350) 0:12:10.391 ********** 2026-04-17 06:53:48.286724 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:53:48.286846 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:53:48.286862 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:53:48.286873 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:53:48.286884 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:53:48.286895 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:53:48.286907 | orchestrator | 2026-04-17 06:53:48.286919 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-17 06:53:48.286932 | orchestrator | Friday 17 April 2026 06:52:56 +0000 (0:00:12.064) 0:12:22.456 ********** 2026-04-17 06:53:48.286943 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:53:48.286954 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:53:48.286965 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:53:48.286975 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:53:48.286986 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:53:48.286997 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-04-17 06:53:48.287009 | orchestrator | 2026-04-17 06:53:48.287020 | orchestrator | TASK [nova-cell : Get a 
list of existing cells] ******************************** 2026-04-17 06:53:48.287031 | orchestrator | Friday 17 April 2026 06:53:01 +0000 (0:00:05.908) 0:12:28.364 ********** 2026-04-17 06:53:48.287042 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:53:48.287054 | orchestrator | 2026-04-17 06:53:48.287064 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-17 06:53:48.287075 | orchestrator | Friday 17 April 2026 06:53:16 +0000 (0:00:14.258) 0:12:42.623 ********** 2026-04-17 06:53:48.287086 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:53:48.287097 | orchestrator | 2026-04-17 06:53:48.287107 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-17 06:53:48.287118 | orchestrator | Friday 17 April 2026 06:53:19 +0000 (0:00:02.997) 0:12:45.621 ********** 2026-04-17 06:53:48.287129 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:53:48.287140 | orchestrator | 2026-04-17 06:53:48.287165 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-17 06:53:48.287176 | orchestrator | Friday 17 April 2026 06:53:21 +0000 (0:00:02.660) 0:12:48.281 ********** 2026-04-17 06:53:48.287187 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-17 06:53:48.287199 | orchestrator | 2026-04-17 06:53:48.287210 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-17 06:53:48.287221 | orchestrator | 2026-04-17 06:53:48.287246 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-17 06:53:48.287259 | orchestrator | Friday 17 April 2026 06:53:34 +0000 (0:00:12.821) 0:13:01.103 ********** 2026-04-17 06:53:48.287271 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:53:48.287285 | orchestrator | changed: [testbed-node-1] 
2026-04-17 06:53:48.287297 | orchestrator | changed: [testbed-node-2] 2026-04-17 06:53:48.287310 | orchestrator | 2026-04-17 06:53:48.287322 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-17 06:53:48.287335 | orchestrator | 2026-04-17 06:53:48.287347 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-17 06:53:48.287382 | orchestrator | Friday 17 April 2026 06:53:36 +0000 (0:00:02.284) 0:13:03.387 ********** 2026-04-17 06:53:48.287395 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:53:48.287407 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:53:48.287419 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:53:48.287432 | orchestrator | 2026-04-17 06:53:48.287444 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-17 06:53:48.287457 | orchestrator | 2026-04-17 06:53:48.287469 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-17 06:53:48.287481 | orchestrator | Friday 17 April 2026 06:53:39 +0000 (0:00:02.218) 0:13:05.605 ********** 2026-04-17 06:53:48.287515 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-17 06:53:48.287529 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-17 06:53:48.287541 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-17 06:53:48.287553 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-17 06:53:48.287564 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-17 06:53:48.287575 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-17 06:53:48.287585 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:53:48.287596 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-17 06:53:48.287607 | orchestrator | 
skipping: [testbed-node-4] => (item=nova-compute)  2026-04-17 06:53:48.287618 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-17 06:53:48.287629 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-17 06:53:48.287639 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-17 06:53:48.287650 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-17 06:53:48.287660 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:53:48.287674 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-17 06:53:48.287693 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-17 06:53:48.287718 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-17 06:53:48.287729 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-17 06:53:48.287740 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-17 06:53:48.287750 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-17 06:53:48.287775 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:53:48.287786 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-17 06:53:48.287797 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-17 06:53:48.287807 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-17 06:53:48.287818 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-17 06:53:48.287846 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-17 06:53:48.287857 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-17 06:53:48.287868 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:53:48.287879 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-17 06:53:48.287889 | orchestrator | 
skipping: [testbed-node-1] => (item=nova-compute)  2026-04-17 06:53:48.287900 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-17 06:53:48.287910 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-17 06:53:48.287921 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-17 06:53:48.287931 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-17 06:53:48.287942 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:53:48.287952 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-17 06:53:48.287963 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-17 06:53:48.287973 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-17 06:53:48.287993 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-17 06:53:48.288004 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-17 06:53:48.288014 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-17 06:53:48.288025 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:53:48.288036 | orchestrator | 2026-04-17 06:53:48.288046 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-17 06:53:48.288057 | orchestrator | 2026-04-17 06:53:48.288067 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-17 06:53:48.288078 | orchestrator | Friday 17 April 2026 06:53:41 +0000 (0:00:02.722) 0:13:08.327 ********** 2026-04-17 06:53:48.288089 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-17 06:53:48.288099 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-17 06:53:48.288122 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:53:48.288132 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  
2026-04-17 06:53:48.288143 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-17 06:53:48.288153 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:53:48.288164 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-17 06:53:48.288174 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-17 06:53:48.288185 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:53:48.288196 | orchestrator | 2026-04-17 06:53:48.288206 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-17 06:53:48.288217 | orchestrator | 2026-04-17 06:53:48.288227 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-17 06:53:48.288238 | orchestrator | Friday 17 April 2026 06:53:43 +0000 (0:00:02.028) 0:13:10.356 ********** 2026-04-17 06:53:48.288249 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:53:48.288259 | orchestrator | 2026-04-17 06:53:48.288270 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-17 06:53:48.288281 | orchestrator | 2026-04-17 06:53:48.288292 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-17 06:53:48.288302 | orchestrator | Friday 17 April 2026 06:53:46 +0000 (0:00:02.051) 0:13:12.407 ********** 2026-04-17 06:53:48.288313 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:53:48.288323 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:53:48.288338 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:53:48.288349 | orchestrator | 2026-04-17 06:53:48.288360 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 06:53:48.288371 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 06:53:48.288384 | orchestrator | testbed-node-0 : ok=58  changed=25  
unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-17 06:53:48.288395 | orchestrator | testbed-node-1 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0 2026-04-17 06:53:48.288406 | orchestrator | testbed-node-2 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0 2026-04-17 06:53:48.288416 | orchestrator | testbed-node-3 : ok=49  changed=15  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-17 06:53:48.288427 | orchestrator | testbed-node-4 : ok=48  changed=14  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-17 06:53:48.288443 | orchestrator | testbed-node-5 : ok=43  changed=14  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-17 06:53:48.288454 | orchestrator | 2026-04-17 06:53:48.288471 | orchestrator | 2026-04-17 06:53:48.288482 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 06:53:48.288522 | orchestrator | Friday 17 April 2026 06:53:48 +0000 (0:00:02.250) 0:13:14.658 ********** 2026-04-17 06:53:48.288533 | orchestrator | =============================================================================== 2026-04-17 06:53:48.288544 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.21s 2026-04-17 06:53:48.288555 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 34.44s 2026-04-17 06:53:48.288565 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.04s 2026-04-17 06:53:48.288582 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 31.15s 2026-04-17 06:53:48.792168 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 27.20s 2026-04-17 06:53:48.792268 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.93s 2026-04-17 06:53:48.792284 | orchestrator | nova-cell : Restart 
nova-novncproxy container -------------------------- 22.30s 2026-04-17 06:53:48.792296 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.55s 2026-04-17 06:53:48.792307 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 15.19s 2026-04-17 06:53:48.792318 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.12s 2026-04-17 06:53:48.792329 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.26s 2026-04-17 06:53:48.792340 | orchestrator | nova : Restart nova-api container -------------------------------------- 14.05s 2026-04-17 06:53:48.792351 | orchestrator | nova-cell : Update cell ------------------------------------------------ 13.59s 2026-04-17 06:53:48.792361 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.45s 2026-04-17 06:53:48.792372 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 12.83s 2026-04-17 06:53:48.792382 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.82s 2026-04-17 06:53:48.792393 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 12.30s 2026-04-17 06:53:48.792403 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 12.07s 2026-04-17 06:53:48.792414 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.61s 2026-04-17 06:53:48.792425 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 10.35s 2026-04-17 06:53:48.991729 | orchestrator | + osism apply nova-update-cell-mappings 2026-04-17 06:54:00.325028 | orchestrator | 2026-04-17 06:54:00 | INFO  | Prepare task for execution of nova-update-cell-mappings. 
2026-04-17 06:54:00.430284 | orchestrator | 2026-04-17 06:54:00 | INFO  | Task 1188eb17-ce2d-4a8e-b583-7132ab7a8178 (nova-update-cell-mappings) was prepared for execution. 2026-04-17 06:54:00.430400 | orchestrator | 2026-04-17 06:54:00 | INFO  | It takes a moment until task 1188eb17-ce2d-4a8e-b583-7132ab7a8178 (nova-update-cell-mappings) has been started and output is visible here. 2026-04-17 06:54:31.951049 | orchestrator | 2026-04-17 06:54:31.951138 | orchestrator | PLAY [Update Nova cell mappings] *********************************************** 2026-04-17 06:54:31.951147 | orchestrator | 2026-04-17 06:54:31.951153 | orchestrator | TASK [Get list of Nova cells] ************************************************** 2026-04-17 06:54:31.951159 | orchestrator | Friday 17 April 2026 06:54:05 +0000 (0:00:01.658) 0:00:01.658 ********** 2026-04-17 06:54:31.951164 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:54:31.951170 | orchestrator | 2026-04-17 06:54:31.951175 | orchestrator | TASK [Parse cell information] ************************************************** 2026-04-17 06:54:31.951180 | orchestrator | Friday 17 April 2026 06:54:19 +0000 (0:00:14.366) 0:00:16.024 ********** 2026-04-17 06:54:31.951185 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:54:31.951190 | orchestrator | 2026-04-17 06:54:31.951195 | orchestrator | TASK [Display cells to update] ************************************************* 2026-04-17 06:54:31.951200 | orchestrator | Friday 17 April 2026 06:54:21 +0000 (0:00:01.206) 0:00:17.231 ********** 2026-04-17 06:54:31.951222 | orchestrator | ok: [testbed-node-0] => { 2026-04-17 06:54:31.951228 | orchestrator |  "msg": "Cells to update: [{'name': '', 'uuid': '99db51b6-27fa-43d6-8db3-8bd81c818d64'}]" 2026-04-17 06:54:31.951235 | orchestrator | } 2026-04-17 06:54:31.951240 | orchestrator | 2026-04-17 06:54:31.951245 | orchestrator | TASK [Use specified cell UUID if provided] ************************************* 2026-04-17 06:54:31.951250 | 
orchestrator | Friday 17 April 2026 06:54:22 +0000 (0:00:01.158) 0:00:18.390 ********** 2026-04-17 06:54:31.951264 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:54:31.951269 | orchestrator | 2026-04-17 06:54:31.951273 | orchestrator | TASK [Abort if multiple cells found without specific UUID and abort_on_multiple is enabled] *** 2026-04-17 06:54:31.951279 | orchestrator | Friday 17 April 2026 06:54:23 +0000 (0:00:01.101) 0:00:19.491 ********** 2026-04-17 06:54:31.951284 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:54:31.951288 | orchestrator | 2026-04-17 06:54:31.951293 | orchestrator | TASK [Update Nova cell mappings] *********************************************** 2026-04-17 06:54:31.951298 | orchestrator | Friday 17 April 2026 06:54:24 +0000 (0:00:01.089) 0:00:20.581 ********** 2026-04-17 06:54:31.951303 | orchestrator | changed: [testbed-node-0] => (item=99db51b6-27fa-43d6-8db3-8bd81c818d64) 2026-04-17 06:54:31.951307 | orchestrator | 2026-04-17 06:54:31.951312 | orchestrator | TASK [Display update results] ************************************************** 2026-04-17 06:54:31.951317 | orchestrator | Friday 17 April 2026 06:54:29 +0000 (0:00:05.498) 0:00:26.080 ********** 2026-04-17 06:54:31.951332 | orchestrator | ok: [testbed-node-0] => (item=99db51b6-27fa-43d6-8db3-8bd81c818d64) => { 2026-04-17 06:54:31.951337 | orchestrator |  "msg": "Cell 99db51b6-27fa-43d6-8db3-8bd81c818d64 updated successfully" 2026-04-17 06:54:31.951342 | orchestrator | } 2026-04-17 06:54:31.951347 | orchestrator | 2026-04-17 06:54:31.951352 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 06:54:31.951357 | orchestrator | testbed-node-0 : ok=5  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 06:54:31.951363 | orchestrator | 2026-04-17 06:54:31.951368 | orchestrator | 2026-04-17 06:54:31.951373 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-17 06:54:31.951378 | orchestrator | Friday 17 April 2026 06:54:31 +0000 (0:00:01.676) 0:00:27.756 ********** 2026-04-17 06:54:31.951382 | orchestrator | =============================================================================== 2026-04-17 06:54:31.951387 | orchestrator | Get list of Nova cells ------------------------------------------------- 14.37s 2026-04-17 06:54:31.951392 | orchestrator | Update Nova cell mappings ----------------------------------------------- 5.50s 2026-04-17 06:54:31.951397 | orchestrator | Display update results -------------------------------------------------- 1.68s 2026-04-17 06:54:31.951401 | orchestrator | Parse cell information -------------------------------------------------- 1.21s 2026-04-17 06:54:31.951406 | orchestrator | Display cells to update ------------------------------------------------- 1.16s 2026-04-17 06:54:31.951411 | orchestrator | Use specified cell UUID if provided ------------------------------------- 1.10s 2026-04-17 06:54:31.951416 | orchestrator | Abort if multiple cells found without specific UUID and abort_on_multiple is enabled --- 1.09s 2026-04-17 06:54:32.184150 | orchestrator | + osism apply -a upgrade nova 2026-04-17 06:54:33.514914 | orchestrator | 2026-04-17 06:54:33 | INFO  | Prepare task for execution of nova. 2026-04-17 06:54:33.581345 | orchestrator | 2026-04-17 06:54:33 | INFO  | Task f595aaf7-6ee5-42e9-95e8-dbab4562fa54 (nova) was prepared for execution. 2026-04-17 06:54:33.581444 | orchestrator | 2026-04-17 06:54:33 | INFO  | It takes a moment until task f595aaf7-6ee5-42e9-95e8-dbab4562fa54 (nova) has been started and output is visible here. 
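Each `osism apply` invocation above logs a "Task &lt;uuid&gt; (&lt;playbook&gt;) was prepared for execution." line before streaming the play output. When post-processing a captured console log like this one, those preparation lines can be extracted with a small regex. This is a sketch against the log format shown here, not an OSISM API; the helper name `prepared_tasks` is illustrative:

```python
import re

# Matches lines such as:
# "Task f595aaf7-6ee5-42e9-95e8-dbab4562fa54 (nova) was prepared for execution."
TASK_RE = re.compile(
    r"Task ([0-9a-f-]{36}) \(([\w-]+)\) was prepared for execution\."
)

def prepared_tasks(log_text):
    """Return (uuid, playbook) pairs for every prepared osism task in the log."""
    return TASK_RE.findall(log_text)

line = (
    "2026-04-17 06:54:33 | INFO  | Task f595aaf7-6ee5-42e9-95e8-dbab4562fa54 "
    "(nova) was prepared for execution."
)
print(prepared_tasks(line))
# [('f595aaf7-6ee5-42e9-95e8-dbab4562fa54', 'nova')]
```

The task UUID is what ties the CLI invocation to the Ansible output that follows it, so pulling out these pairs gives a quick index of which playbooks ran in a long console capture.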
2026-04-17 06:55:46.492324 | orchestrator |
2026-04-17 06:55:46.492506 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 06:55:46.492527 | orchestrator |
2026-04-17 06:55:46.492539 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-17 06:55:46.492573 | orchestrator | Friday 17 April 2026 06:54:38 +0000 (0:00:01.558) 0:00:01.558 **********
2026-04-17 06:55:46.492585 | orchestrator | changed: [testbed-manager]
2026-04-17 06:55:46.492597 | orchestrator | changed: [testbed-node-0]
2026-04-17 06:55:46.492608 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:55:46.492619 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:55:46.492630 | orchestrator | changed: [testbed-node-3]
2026-04-17 06:55:46.492640 | orchestrator | changed: [testbed-node-4]
2026-04-17 06:55:46.492651 | orchestrator | changed: [testbed-node-5]
2026-04-17 06:55:46.492661 | orchestrator |
2026-04-17 06:55:46.492672 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 06:55:46.492684 | orchestrator | Friday 17 April 2026 06:54:42 +0000 (0:00:03.826) 0:00:05.385 **********
2026-04-17 06:55:46.492695 | orchestrator | changed: [testbed-manager]
2026-04-17 06:55:46.492706 | orchestrator | changed: [testbed-node-0]
2026-04-17 06:55:46.492716 | orchestrator | changed: [testbed-node-1]
2026-04-17 06:55:46.492727 | orchestrator | changed: [testbed-node-2]
2026-04-17 06:55:46.492738 | orchestrator | changed: [testbed-node-3]
2026-04-17 06:55:46.492748 | orchestrator | changed: [testbed-node-4]
2026-04-17 06:55:46.492759 | orchestrator | changed: [testbed-node-5]
2026-04-17 06:55:46.492769 | orchestrator |
2026-04-17 06:55:46.492780 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 06:55:46.492790 | orchestrator | Friday 17 April 2026 06:54:44 +0000 (0:00:02.120) 0:00:07.506 **********
2026-04-17 06:55:46.492801 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-17 06:55:46.492813 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-17 06:55:46.492823 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-17 06:55:46.492834 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-17 06:55:46.492844 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-17 06:55:46.492855 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-17 06:55:46.492867 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-17 06:55:46.492880 | orchestrator |
2026-04-17 06:55:46.492892 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-17 06:55:46.492905 | orchestrator |
2026-04-17 06:55:46.492917 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-17 06:55:46.492929 | orchestrator | Friday 17 April 2026 06:54:48 +0000 (0:00:03.401) 0:00:10.908 **********
2026-04-17 06:55:46.492941 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:55:46.492953 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:55:46.492965 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:55:46.492977 | orchestrator |
2026-04-17 06:55:46.492989 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-17 06:55:46.493001 | orchestrator | Friday 17 April 2026 06:54:50 +0000 (0:00:02.653) 0:00:13.561 **********
2026-04-17 06:55:46.493013 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 06:55:46.493026 | orchestrator |
2026-04-17 06:55:46.493038 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-17 06:55:46.493051 | orchestrator | Friday 17 April 2026 06:54:53 +0000 (0:00:02.468) 0:00:16.030 **********
2026-04-17 06:55:46.493063 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:55:46.493075 | orchestrator |
2026-04-17 06:55:46.493088 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-17 06:55:46.493113 | orchestrator | Friday 17 April 2026 06:54:55 +0000 (0:00:01.941) 0:00:17.972 **********
2026-04-17 06:55:46.493125 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:55:46.493135 | orchestrator |
2026-04-17 06:55:46.493146 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-17 06:55:46.493156 | orchestrator | Friday 17 April 2026 06:54:57 +0000 (0:00:02.084) 0:00:20.056 **********
2026-04-17 06:55:46.493175 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:55:46.493186 | orchestrator |
2026-04-17 06:55:46.493196 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-17 06:55:46.493207 | orchestrator | Friday 17 April 2026 06:55:01 +0000 (0:00:04.010) 0:00:24.067 **********
2026-04-17 06:55:46.493218 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:55:46.493228 | orchestrator |
2026-04-17 06:55:46.493238 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-17 06:55:46.493249 | orchestrator |
2026-04-17 06:55:46.493260 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-17 06:55:46.493270 | orchestrator | Friday 17 April 2026 06:55:20 +0000 (0:00:19.051) 0:00:43.118 **********
2026-04-17 06:55:46.493281 | orchestrator | skipping: [testbed-node-0]
2026-04-17 06:55:46.493291 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:55:46.493302 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:55:46.493313 | orchestrator |
2026-04-17 06:55:46.493323 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-17 06:55:46.493334 | orchestrator | Friday 17 April 2026 06:55:21 +0000 (0:00:01.304) 0:00:44.423 **********
2026-04-17 06:55:46.493345 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 06:55:46.493355 | orchestrator |
2026-04-17 06:55:46.493366 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-17 06:55:46.493376 | orchestrator | Friday 17 April 2026 06:55:23 +0000 (0:00:01.794) 0:00:46.218 **********
2026-04-17 06:55:46.493387 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:55:46.493398 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:55:46.493408 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:55:46.493419 | orchestrator |
2026-04-17 06:55:46.493430 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-17 06:55:46.493461 | orchestrator | Friday 17 April 2026 06:55:24 +0000 (0:00:01.479) 0:00:47.697 **********
2026-04-17 06:55:46.493472 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:55:46.493483 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:55:46.493494 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:55:46.493505 | orchestrator |
2026-04-17 06:55:46.493533 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-17 06:55:46.493545 | orchestrator | Friday 17 April 2026 06:55:26 +0000 (0:00:01.903) 0:00:49.601 **********
2026-04-17 06:55:46.493555 | orchestrator | skipping: [testbed-node-1]
2026-04-17 06:55:46.493566 | orchestrator | skipping: [testbed-node-2]
2026-04-17 06:55:46.493577 | orchestrator | ok: [testbed-node-0]
2026-04-17 06:55:46.493588 | orchestrator |
2026-04-17 06:55:46.493598 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-17 06:55:46.493609 | orchestrator | Friday 17 April 2026 06:55:30 +0000 (0:00:03.603) 0:00:53.205 **********
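The service definitions echoed in the task output that follows each carry a `healthcheck` dict with string-valued seconds (`interval`, `retries`, `start_period`, `timeout`) plus a `test` command list. As a rough sketch of how such a dict maps onto the Docker Engine's healthcheck settings, which take durations in nanoseconds (the helper name `to_docker_healthcheck` is hypothetical; this is not kolla-ansible's actual conversion code):

```python
def to_docker_healthcheck(hc):
    """Convert a kolla-style healthcheck dict (seconds as strings) into the
    shape the Docker API expects (nanosecond durations). Illustrative only."""
    def ns(seconds):
        return int(seconds) * 1_000_000_000  # Docker durations are nanoseconds

    return {
        "test": hc["test"],                  # e.g. ['CMD-SHELL', 'healthcheck_curl ...']
        "interval": ns(hc["interval"]),
        "timeout": ns(hc["timeout"]),
        "start_period": ns(hc["start_period"]),
        "retries": int(hc["retries"]),
    }
```

So the `{'interval': '30', ...}` dicts in the log correspond to a 30 s probe interval with 3 retries and a 5 s start period on each container.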
2026-04-17 06:55:46.493620 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:55:46.493631 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:55:46.493641 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:55:46.493652 | orchestrator | 2026-04-17 06:55:46.493663 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-17 06:55:46.493674 | orchestrator | 2026-04-17 06:55:46.493684 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-17 06:55:46.493695 | orchestrator | Friday 17 April 2026 06:55:43 +0000 (0:00:12.966) 0:01:06.172 ********** 2026-04-17 06:55:46.493706 | orchestrator | included: /ansible/roles/nova/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:55:46.493718 | orchestrator | 2026-04-17 06:55:46.493729 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-17 06:55:46.493739 | orchestrator | Friday 17 April 2026 06:55:45 +0000 (0:00:01.983) 0:01:08.155 ********** 2026-04-17 06:55:46.493756 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:55:46.493785 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:55:46.493807 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:55:58.443109 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:55:58.443255 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:55:58.443289 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:55:58.443304 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:55:58.443334 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:55:58.443347 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:55:58.443368 | orchestrator | 2026-04-17 06:55:58.443381 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-17 06:55:58.443393 | orchestrator | Friday 17 April 2026 06:55:48 +0000 (0:00:03.275) 0:01:11.431 ********** 2026-04-17 06:55:58.443404 | orchestrator | skipping: 
[testbed-node-0] 2026-04-17 06:55:58.443416 | orchestrator | 2026-04-17 06:55:58.443494 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-17 06:55:58.443509 | orchestrator | Friday 17 April 2026 06:55:49 +0000 (0:00:01.164) 0:01:12.596 ********** 2026-04-17 06:55:58.443520 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:55:58.443531 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:55:58.443541 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:55:58.443552 | orchestrator | 2026-04-17 06:55:58.443563 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-17 06:55:58.443573 | orchestrator | Friday 17 April 2026 06:55:51 +0000 (0:00:01.607) 0:01:14.203 ********** 2026-04-17 06:55:58.443584 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 06:55:58.443595 | orchestrator | 2026-04-17 06:55:58.443605 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-17 06:55:58.443617 | orchestrator | Friday 17 April 2026 06:55:53 +0000 (0:00:02.156) 0:01:16.360 ********** 2026-04-17 06:55:58.443628 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:55:58.443639 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:55:58.443651 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:55:58.443663 | orchestrator | 2026-04-17 06:55:58.443675 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-17 06:55:58.443687 | orchestrator | Friday 17 April 2026 06:55:54 +0000 (0:00:01.338) 0:01:17.698 ********** 2026-04-17 06:55:58.443700 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:55:58.443713 | orchestrator | 2026-04-17 06:55:58.443725 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-17 
06:55:58.443737 | orchestrator | Friday 17 April 2026 06:55:56 +0000 (0:00:02.128) 0:01:19.827 ********** 2026-04-17 06:55:58.443757 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:55:58.443782 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:01.662804 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:01.662948 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:01.662970 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:01.663005 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:01.663043 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:56:01.663057 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-17 06:56:01.663074 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:56:01.663086 | orchestrator | 2026-04-17 06:56:01.663099 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-17 06:56:01.663111 | orchestrator | Friday 17 April 2026 06:56:01 +0000 (0:00:04.243) 0:01:24.070 ********** 2026-04-17 06:56:01.663124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:01.663145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:03.519475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:56:03.519572 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:56:03.519585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:03.519608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:03.519616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:56:03.520333 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:56:03.520363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:03.520372 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:03.520384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:56:03.520391 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:56:03.520397 | orchestrator | 2026-04-17 06:56:03.520404 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-17 06:56:03.520412 | orchestrator | Friday 17 April 2026 06:56:03 +0000 
(0:00:01.849) 0:01:25.920 ********** 2026-04-17 06:56:03.520418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:03.520467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:06.681247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:56:06.681352 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:56:06.681388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:06.681404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:06.681492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:56:06.681506 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:56:06.681539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:06.681558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:06.681571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:56:06.681592 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:56:06.681603 | orchestrator | 2026-04-17 06:56:06.681615 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-17 06:56:06.681627 | orchestrator | Friday 17 April 2026 06:56:05 +0000 (0:00:02.171) 0:01:28.091 ********** 2026-04-17 06:56:06.681639 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:06.681659 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:12.723007 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:12.723109 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:12.723142 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:12.723194 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:12.723208 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:56:12.723224 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:56:12.723234 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:56:12.723250 | orchestrator | 2026-04-17 06:56:12.723260 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-17 06:56:12.723270 | orchestrator | Friday 17 April 2026 06:56:09 +0000 (0:00:04.230) 0:01:32.321 ********** 2026-04-17 06:56:12.723279 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:12.723296 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:20.054081 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:20.054207 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:20.054224 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:20.054253 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:56:20.054266 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:56:20.054285 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:56:20.054304 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:56:20.054315 | orchestrator | 2026-04-17 06:56:20.054326 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-17 06:56:20.054337 | orchestrator | Friday 17 April 2026 06:56:19 +0000 (0:00:10.111) 0:01:42.432 ********** 2026-04-17 06:56:20.054347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:20.054365 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:32.669261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:56:32.669461 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:56:32.669487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:32.669503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:32.669517 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:56:32.669529 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:56:32.669561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:32.669590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 
'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:32.669603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:56:32.669614 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:56:32.669625 | orchestrator | 2026-04-17 06:56:32.669637 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-17 06:56:32.669649 | orchestrator | Friday 17 April 2026 06:56:21 +0000 (0:00:02.199) 0:01:44.632 ********** 2026-04-17 06:56:32.669660 | orchestrator | skipping: [testbed-node-0] 2026-04-17 
06:56:32.669671 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:56:32.669681 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:56:32.669692 | orchestrator | 2026-04-17 06:56:32.669702 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-04-17 06:56:32.669713 | orchestrator | Friday 17 April 2026 06:56:23 +0000 (0:00:02.075) 0:01:46.708 ********** 2026-04-17 06:56:32.669723 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:56:32.669734 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:56:32.669744 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:56:32.669755 | orchestrator | 2026-04-17 06:56:32.669765 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-04-17 06:56:32.669776 | orchestrator | Friday 17 April 2026 06:56:25 +0000 (0:00:01.898) 0:01:48.606 ********** 2026-04-17 06:56:32.669787 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-04-17 06:56:32.669798 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-17 06:56:32.669808 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:56:32.669819 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-04-17 06:56:32.669830 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-17 06:56:32.669840 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:56:32.669850 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-04-17 06:56:32.669861 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-17 06:56:32.669872 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:56:32.669882 | orchestrator | 2026-04-17 06:56:32.669893 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-04-17 06:56:32.669903 | orchestrator | Friday 17 April 2026 06:56:27 +0000 (0:00:01.415) 0:01:50.021 ********** 2026-04-17 
06:56:32.669921 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-04-17 06:56:32.669933 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-04-17 06:56:32.669944 | orchestrator | 2026-04-17 06:56:32.669955 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-04-17 06:56:32.669965 | orchestrator | Friday 17 April 2026 06:56:30 +0000 (0:00:03.328) 0:01:53.350 ********** 2026-04-17 06:56:32.669976 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:56:32.669986 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:56:32.669997 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:56:32.670007 | orchestrator | 2026-04-17 06:56:58.763620 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-04-17 06:56:58.763738 | orchestrator | Friday 17 April 2026 06:56:33 +0000 (0:00:02.970) 0:01:56.320 ********** 2026-04-17 06:56:58.763751 | orchestrator | ok: [testbed-node-0] 2026-04-17 06:56:58.763760 | orchestrator | ok: [testbed-node-2] 2026-04-17 06:56:58.763766 | orchestrator | ok: [testbed-node-1] 2026-04-17 06:56:58.763772 | orchestrator | 2026-04-17 06:56:58.763780 | orchestrator | TASK [nova : Run Nova upgrade checks] ****************************************** 2026-04-17 06:56:58.763786 | orchestrator | Friday 17 April 2026 06:56:36 +0000 (0:00:03.502) 0:01:59.822 ********** 2026-04-17 06:56:58.763793 | orchestrator | changed: [testbed-node-0] 2026-04-17 06:56:58.763800 | orchestrator | 2026-04-17 06:56:58.763807 | orchestrator | TASK [nova : Upgrade status check result] ************************************** 2026-04-17 06:56:58.763814 | orchestrator | Friday 17 April 2026 06:56:56 +0000 (0:00:19.243) 0:02:19.065 ********** 2026-04-17 
06:56:58.763834 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:56:58.763840 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:56:58.763847 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:56:58.763853 | orchestrator | 2026-04-17 06:56:58.763859 | orchestrator | TASK [nova : Stopping top level nova services] ********************************* 2026-04-17 06:56:58.763865 | orchestrator | Friday 17 April 2026 06:56:57 +0000 (0:00:01.482) 0:02:20.548 ********** 2026-04-17 06:56:58.763875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:58.763886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:58.763924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:58.763936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:56:58.763944 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:56:58.763951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:58.763958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:56:58.763965 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:56:58.763972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:56:58.763991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:57:04.055831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:57:04.055960 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:57:04.055979 | orchestrator | 2026-04-17 06:57:04.055992 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-04-17 06:57:04.056011 | orchestrator | Friday 17 April 2026 06:57:00 +0000 (0:00:02.537) 0:02:23.086 ********** 2026-04-17 06:57:04.056031 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:57:04.056080 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:57:04.056103 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:57:04.056155 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:57:04.056177 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:57:04.056207 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 06:57:04.056226 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:57:04.056260 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:57:07.694669 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 06:57:07.694765 | orchestrator | 2026-04-17 06:57:07.694783 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-04-17 06:57:07.694796 | orchestrator | Friday 17 April 2026 06:57:05 +0000 (0:00:04.975) 0:02:28.061 ********** 2026-04-17 06:57:07.694808 | orchestrator | ok: [testbed-node-0] => { 2026-04-17 06:57:07.694820 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:57:07.694831 | orchestrator | } 2026-04-17 06:57:07.694843 | orchestrator | ok: [testbed-node-1] => { 2026-04-17 06:57:07.694853 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:57:07.694864 | orchestrator | } 2026-04-17 06:57:07.694875 | orchestrator | ok: [testbed-node-2] => { 2026-04-17 06:57:07.694886 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 06:57:07.694920 | orchestrator | } 2026-04-17 06:57:07.694932 | orchestrator | 2026-04-17 06:57:07.694943 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 06:57:07.694954 | orchestrator | Friday 17 April 2026 06:57:06 +0000 (0:00:01.501) 0:02:29.563 ********** 2026-04-17 06:57:07.694969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:57:07.694985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:57:07.694998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:57:07.695010 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:57:07.695054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:57:07.695077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': 
'30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:57:07.695089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:57:07.695101 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:57:07.695113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:57:07.695139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 06:57:52.368379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 06:57:52.368536 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:57:52.368555 | orchestrator | 2026-04-17 06:57:52.368568 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-17 06:57:52.368581 | orchestrator | Friday 17 April 2026 06:57:09 +0000 (0:00:02.387) 0:02:31.951 ********** 2026-04-17 06:57:52.368592 | orchestrator | 2026-04-17 06:57:52.368603 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-17 06:57:52.368614 | orchestrator | Friday 17 April 2026 06:57:09 +0000 (0:00:00.493) 0:02:32.445 ********** 2026-04-17 06:57:52.368625 | orchestrator | 2026-04-17 06:57:52.368636 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-17 06:57:52.368647 | orchestrator | Friday 17 April 2026 06:57:10 +0000 (0:00:00.492) 0:02:32.938 ********** 2026-04-17 06:57:52.368657 | orchestrator | 2026-04-17 06:57:52.368668 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-17 06:57:52.368679 | orchestrator | 2026-04-17 06:57:52.368689 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 06:57:52.368700 | orchestrator | Friday 17 April 2026 06:57:11 +0000 (0:00:01.583) 0:02:34.522 ********** 2026-04-17 06:57:52.368712 | orchestrator | included: /ansible/roles/nova-cell/tasks/upgrade.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:57:52.368725 | orchestrator | 2026-04-17 06:57:52.368736 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-04-17 06:57:52.368746 | orchestrator | Friday 17 April 2026 06:57:14 +0000 (0:00:02.676) 0:02:37.199 ********** 2026-04-17 06:57:52.368757 | orchestrator | changed: [testbed-node-3] 
2026-04-17 06:57:52.368768 | orchestrator | 2026-04-17 06:57:52.368779 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-04-17 06:57:52.368790 | orchestrator | Friday 17 April 2026 06:57:18 +0000 (0:00:04.528) 0:02:41.727 ********** 2026-04-17 06:57:52.368801 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:57:52.368812 | orchestrator | 2026-04-17 06:57:52.368823 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-04-17 06:57:52.368833 | orchestrator | Friday 17 April 2026 06:57:21 +0000 (0:00:02.388) 0:02:44.116 ********** 2026-04-17 06:57:52.368845 | orchestrator | included: service-image-info for testbed-node-3 2026-04-17 06:57:52.368856 | orchestrator | 2026-04-17 06:57:52.368867 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-04-17 06:57:52.368877 | orchestrator | Friday 17 April 2026 06:57:23 +0000 (0:00:02.137) 0:02:46.254 ********** 2026-04-17 06:57:52.368888 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:57:52.368899 | orchestrator | 2026-04-17 06:57:52.368910 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-17 06:57:52.368924 | orchestrator | Friday 17 April 2026 06:57:27 +0000 (0:00:04.527) 0:02:50.781 ********** 2026-04-17 06:57:52.368937 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:57:52.368949 | orchestrator | 2026-04-17 06:57:52.368962 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-04-17 06:57:52.368975 | orchestrator | Friday 17 April 2026 06:57:31 +0000 (0:00:03.161) 0:02:53.943 ********** 2026-04-17 06:57:52.368987 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:57:52.369021 | orchestrator | 2026-04-17 06:57:52.369034 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-17 06:57:52.369047 | orchestrator 
| Friday 17 April 2026 06:57:34 +0000 (0:00:03.118) 0:02:57.062 ********** 2026-04-17 06:57:52.369059 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:57:52.369072 | orchestrator | 2026-04-17 06:57:52.369084 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-04-17 06:57:52.369097 | orchestrator | Friday 17 April 2026 06:57:37 +0000 (0:00:03.208) 0:03:00.270 ********** 2026-04-17 06:57:52.369109 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:57:52.369121 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:57:52.369133 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:57:52.369145 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:57:52.369172 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:57:52.369184 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:57:52.369196 | orchestrator | 2026-04-17 06:57:52.369208 | orchestrator | TASK [nova-cell : Get current Libvirt version] ********************************* 2026-04-17 06:57:52.369220 | orchestrator | Friday 17 April 2026 06:57:42 +0000 (0:00:05.398) 0:03:05.669 ********** 2026-04-17 06:57:52.369231 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:57:52.369242 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:57:52.369252 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:57:52.369264 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:57:52.369274 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:57:52.369284 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:57:52.369295 | orchestrator | 2026-04-17 06:57:52.369306 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-04-17 06:57:52.369317 | orchestrator | Friday 17 April 2026 06:57:48 +0000 (0:00:05.385) 0:03:11.055 ********** 2026-04-17 06:57:52.369328 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:57:52.369338 | orchestrator | skipping: [testbed-node-5] 2026-04-17 
06:57:52.369349 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:57:52.369360 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:57:52.369370 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:57:52.369397 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:57:52.369427 | orchestrator | 2026-04-17 06:57:52.369439 | orchestrator | TASK [nova-cell : Stopping nova cell services] ********************************* 2026-04-17 06:57:52.369450 | orchestrator | Friday 17 April 2026 06:57:51 +0000 (0:00:03.053) 0:03:14.109 ********** 2026-04-17 06:57:52.369463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:57:52.369477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:57:52.369490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:57:52.369517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:57:52.369528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:57:52.369548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:58:03.257753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:58:03.257890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:58:03.257908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:58:03.257928 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:58:03.257950 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:58:03.257967 | orchestrator | skipping: [testbed-node-5] 2026-04-17 
06:58:03.258005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:58:03.258091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:58:03.258108 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:58:03.258141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:58:03.258154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:58:03.258177 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:58:03.258189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:58:03.258201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:58:03.258212 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:58:03.258223 | orchestrator | 2026-04-17 06:58:03.258235 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-17 06:58:03.258247 | orchestrator | Friday 17 April 2026 06:57:54 +0000 (0:00:03.371) 0:03:17.480 ********** 2026-04-17 06:58:03.258258 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:58:03.258269 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:58:03.258282 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:58:03.258296 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 06:58:03.258310 | orchestrator | 2026-04-17 06:58:03.258329 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-17 06:58:03.258342 | orchestrator | Friday 17 April 2026 06:57:56 +0000 (0:00:02.274) 0:03:19.755 ********** 2026-04-17 06:58:03.258355 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-17 06:58:03.258367 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-17 06:58:03.258380 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-17 06:58:03.258392 | orchestrator | 2026-04-17 06:58:03.258405 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-17 06:58:03.258463 | orchestrator | Friday 17 April 2026 06:57:58 +0000 (0:00:01.953) 0:03:21.709 ********** 2026-04-17 06:58:03.258486 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-17 06:58:03.258508 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-17 06:58:03.258522 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-17 06:58:03.258536 | orchestrator | 2026-04-17 06:58:03.258555 | 
orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-17 06:58:03.258574 | orchestrator | Friday 17 April 2026 06:58:01 +0000 (0:00:02.216) 0:03:23.926 ********** 2026-04-17 06:58:03.258593 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-17 06:58:03.258611 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:58:03.258628 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-17 06:58:03.258648 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:58:03.258664 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-17 06:58:03.258681 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:58:03.258698 | orchestrator | 2026-04-17 06:58:03.258715 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-17 06:58:03.258745 | orchestrator | Friday 17 April 2026 06:58:02 +0000 (0:00:01.652) 0:03:25.578 ********** 2026-04-17 06:58:03.258762 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 06:58:03.258779 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 06:58:03.258797 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:58:03.258828 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 06:58:12.879042 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 06:58:12.879153 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-17 06:58:12.879169 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-17 06:58:12.879180 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:58:12.879192 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-17 06:58:12.879203 | orchestrator | skipping: 
[testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 06:58:12.879214 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 06:58:12.879225 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:58:12.879235 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-17 06:58:12.879246 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-17 06:58:12.879257 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-17 06:58:12.879267 | orchestrator | 2026-04-17 06:58:12.879278 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-17 06:58:12.879290 | orchestrator | Friday 17 April 2026 06:58:04 +0000 (0:00:02.116) 0:03:27.695 ********** 2026-04-17 06:58:12.879301 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:58:12.879311 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:58:12.879322 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:58:12.879334 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:58:12.879344 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:58:12.879355 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:58:12.879365 | orchestrator | 2026-04-17 06:58:12.879376 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-17 06:58:12.879387 | orchestrator | Friday 17 April 2026 06:58:07 +0000 (0:00:02.480) 0:03:30.175 ********** 2026-04-17 06:58:12.879398 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:58:12.879408 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:58:12.879419 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:58:12.879491 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:58:12.879503 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:58:12.879513 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:58:12.879524 
| orchestrator | 2026-04-17 06:58:12.879535 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-17 06:58:12.879546 | orchestrator | Friday 17 April 2026 06:58:10 +0000 (0:00:03.554) 0:03:33.730 ********** 2026-04-17 06:58:12.879578 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:58:12.879621 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:58:12.879636 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:58:12.879669 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:58:12.879683 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:58:12.879698 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:58:12.879716 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:12.879740 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:58:12.879761 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:19.042334 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:58:19.042488 | orchestrator 
| ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:19.042508 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:58:19.042539 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:19.042592 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:19.042614 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:19.042633 | orchestrator | 2026-04-17 06:58:19.042654 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 06:58:19.042675 | orchestrator | Friday 17 April 2026 06:58:14 +0000 (0:00:03.549) 0:03:37.280 ********** 2026-04-17 06:58:19.042720 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 06:58:19.042742 | orchestrator | 2026-04-17 06:58:19.042760 | orchestrator | TASK [service-cert-copy : 
nova | Copying over extra CA certificates] *********** 2026-04-17 06:58:19.042772 | orchestrator | Friday 17 April 2026 06:58:16 +0000 (0:00:02.342) 0:03:39.622 ********** 2026-04-17 06:58:19.042784 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:58:19.042796 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2026-04-17 06:58:19.042826 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:58:19.042838 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:58:19.042859 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:58:22.748070 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:58:22.748179 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:58:22.748197 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:58:22.748249 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:58:22.748262 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:22.748275 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:22.748306 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:22.748319 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:22.748331 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:22.748356 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:58:22.748392 | orchestrator | 2026-04-17 06:58:22.748406 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-17 06:58:22.748419 | orchestrator | Friday 17 April 2026 06:58:21 +0000 (0:00:04.761) 0:03:44.384 ********** 2026-04-17 06:58:22.748484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:58:22.748509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:58:23.712669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:58:23.712793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:58:23.712809 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:58:23.712835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:58:23.712846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:58:23.712856 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:58:23.712867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:58:23.712895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:58:23.712907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:58:23.712925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:58:23.712940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:58:23.712950 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:58:23.712960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:58:23.712971 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:58:23.712980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:58:23.712990 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:58:23.713007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:58:26.962246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:58:26.962370 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:58:26.962389 | orchestrator | 2026-04-17 06:58:26.962402 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-17 06:58:26.962415 | orchestrator | Friday 17 April 2026 06:58:24 +0000 (0:00:03.445) 0:03:47.830 ********** 2026-04-17 06:58:26.962477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:58:26.962495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:58:26.962508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:58:26.962538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:58:26.962575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:58:26.962588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:58:26.962606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:58:26.962618 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:58:26.962629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:58:26.962641 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:58:26.962652 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:58:26.962664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:58:26.962690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:59:01.086308 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:59:01.086417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:59:01.086459 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:59:01.086486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 06:59:01.086499 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:59:01.086510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 06:59:01.086521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 06:59:01.086551 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:59:01.086562 | orchestrator | 2026-04-17 06:59:01.086572 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 06:59:01.086583 | orchestrator | Friday 17 April 2026 06:58:28 +0000 (0:00:03.790) 0:03:51.620 ********** 2026-04-17 06:59:01.086593 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:59:01.086602 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:59:01.086612 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:59:01.086622 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 06:59:01.086632 | orchestrator | 2026-04-17 06:59:01.086642 | orchestrator | TASK [nova-cell : Check nova keyring file] 
************************************* 2026-04-17 06:59:01.086652 | orchestrator | Friday 17 April 2026 06:58:31 +0000 (0:00:02.348) 0:03:53.968 ********** 2026-04-17 06:59:01.086661 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 06:59:01.086671 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 06:59:01.086681 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 06:59:01.086690 | orchestrator | 2026-04-17 06:59:01.086699 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-17 06:59:01.086709 | orchestrator | Friday 17 April 2026 06:58:33 +0000 (0:00:02.141) 0:03:56.109 ********** 2026-04-17 06:59:01.086718 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 06:59:01.086728 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 06:59:01.086738 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 06:59:01.086747 | orchestrator | 2026-04-17 06:59:01.086757 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-17 06:59:01.086766 | orchestrator | Friday 17 April 2026 06:58:35 +0000 (0:00:02.183) 0:03:58.293 ********** 2026-04-17 06:59:01.086776 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:59:01.086786 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:59:01.086796 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:59:01.086805 | orchestrator | 2026-04-17 06:59:01.086830 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-17 06:59:01.086840 | orchestrator | Friday 17 April 2026 06:58:37 +0000 (0:00:01.843) 0:04:00.137 ********** 2026-04-17 06:59:01.086850 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:59:01.086859 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:59:01.086869 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:59:01.086878 | orchestrator | 2026-04-17 06:59:01.086891 | orchestrator | TASK [nova-cell : Copy over ceph nova 
keyring file] **************************** 2026-04-17 06:59:01.086908 | orchestrator | Friday 17 April 2026 06:58:38 +0000 (0:00:01.630) 0:04:01.767 ********** 2026-04-17 06:59:01.086924 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-17 06:59:01.086940 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-17 06:59:01.086956 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-17 06:59:01.086972 | orchestrator | 2026-04-17 06:59:01.086987 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-17 06:59:01.087003 | orchestrator | Friday 17 April 2026 06:58:41 +0000 (0:00:02.190) 0:04:03.958 ********** 2026-04-17 06:59:01.087019 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-17 06:59:01.087044 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-17 06:59:01.087063 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-17 06:59:01.087081 | orchestrator | 2026-04-17 06:59:01.087094 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-17 06:59:01.087104 | orchestrator | Friday 17 April 2026 06:58:43 +0000 (0:00:02.161) 0:04:06.119 ********** 2026-04-17 06:59:01.087113 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-17 06:59:01.087133 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-17 06:59:01.087142 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-17 06:59:01.087152 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt) 2026-04-17 06:59:01.087161 | orchestrator | ok: [testbed-node-4] => (item=nova-libvirt) 2026-04-17 06:59:01.087171 | orchestrator | ok: [testbed-node-5] => (item=nova-libvirt) 2026-04-17 06:59:01.087180 | orchestrator | 2026-04-17 06:59:01.087190 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-17 06:59:01.087199 | orchestrator 
| Friday 17 April 2026 06:58:48 +0000 (0:00:04.931) 0:04:11.051 ********** 2026-04-17 06:59:01.087209 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:59:01.087219 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:59:01.087234 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:59:01.087250 | orchestrator | 2026-04-17 06:59:01.087266 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-17 06:59:01.087282 | orchestrator | Friday 17 April 2026 06:58:49 +0000 (0:00:01.352) 0:04:12.403 ********** 2026-04-17 06:59:01.087297 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:59:01.087314 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:59:01.087331 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:59:01.087348 | orchestrator | 2026-04-17 06:59:01.087365 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-17 06:59:01.087382 | orchestrator | Friday 17 April 2026 06:58:50 +0000 (0:00:01.383) 0:04:13.787 ********** 2026-04-17 06:59:01.087393 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:59:01.087403 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:59:01.087412 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:59:01.087422 | orchestrator | 2026-04-17 06:59:01.087459 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-17 06:59:01.087469 | orchestrator | Friday 17 April 2026 06:58:53 +0000 (0:00:02.620) 0:04:16.408 ********** 2026-04-17 06:59:01.087480 | orchestrator | ok: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-17 06:59:01.087492 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 
'enabled': True}) 2026-04-17 06:59:01.087502 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-17 06:59:01.087512 | orchestrator | ok: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-17 06:59:01.087521 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-17 06:59:01.087531 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-17 06:59:01.087541 | orchestrator | 2026-04-17 06:59:01.087551 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-17 06:59:01.087561 | orchestrator | Friday 17 April 2026 06:58:57 +0000 (0:00:04.251) 0:04:20.659 ********** 2026-04-17 06:59:01.087570 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-17 06:59:01.087580 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-17 06:59:01.087590 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-17 06:59:01.087600 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-17 06:59:01.087609 | orchestrator | ok: [testbed-node-3] 2026-04-17 06:59:01.087619 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-17 06:59:01.087629 | orchestrator | ok: [testbed-node-4] 2026-04-17 06:59:01.087654 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-17 06:59:18.593017 | orchestrator | ok: [testbed-node-5] 2026-04-17 06:59:18.593138 | orchestrator | 2026-04-17 06:59:18.593155 | 
orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-17 06:59:18.593169 | orchestrator | Friday 17 April 2026 06:59:02 +0000 (0:00:04.383) 0:04:25.042 ********** 2026-04-17 06:59:18.593180 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:59:18.593192 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:59:18.593203 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:59:18.593214 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-04-17 06:59:18.593225 | orchestrator | 2026-04-17 06:59:18.593236 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-17 06:59:18.593247 | orchestrator | Friday 17 April 2026 06:59:05 +0000 (0:00:03.343) 0:04:28.386 ********** 2026-04-17 06:59:18.593258 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 06:59:18.593269 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 06:59:18.593279 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 06:59:18.593290 | orchestrator | 2026-04-17 06:59:18.593301 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-17 06:59:18.593328 | orchestrator | Friday 17 April 2026 06:59:07 +0000 (0:00:02.035) 0:04:30.421 ********** 2026-04-17 06:59:18.593339 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:59:18.593350 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:59:18.593360 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:59:18.593371 | orchestrator | 2026-04-17 06:59:18.593381 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-17 06:59:18.593392 | orchestrator | Friday 17 April 2026 06:59:08 +0000 (0:00:01.396) 0:04:31.818 ********** 2026-04-17 06:59:18.593403 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:59:18.593413 | orchestrator | 
2026-04-17 06:59:18.593463 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-17 06:59:18.593476 | orchestrator | Friday 17 April 2026 06:59:10 +0000 (0:00:01.155) 0:04:32.974 ********** 2026-04-17 06:59:18.593487 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:59:18.593498 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:59:18.593508 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:59:18.593519 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:59:18.593529 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:59:18.593540 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:59:18.593551 | orchestrator | 2026-04-17 06:59:18.593562 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-17 06:59:18.593572 | orchestrator | Friday 17 April 2026 06:59:11 +0000 (0:00:01.727) 0:04:34.702 ********** 2026-04-17 06:59:18.593583 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 06:59:18.593593 | orchestrator | 2026-04-17 06:59:18.593604 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-17 06:59:18.593615 | orchestrator | Friday 17 April 2026 06:59:13 +0000 (0:00:01.840) 0:04:36.543 ********** 2026-04-17 06:59:18.593625 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:59:18.593636 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:59:18.593646 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:59:18.593657 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:59:18.593667 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:59:18.593678 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:59:18.593689 | orchestrator | 2026-04-17 06:59:18.593699 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-17 06:59:18.593709 | orchestrator | Friday 17 April 2026 06:59:15 +0000 
(0:00:02.036) 0:04:38.579 ********** 2026-04-17 06:59:18.593723 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:59:18.593779 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:59:18.593798 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 06:59:18.593811 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:59:18.593823 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:59:18.593835 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:59:18.593859 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:59:18.593871 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:59:18.593890 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 06:59:21.731276 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:59:21.731381 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}}) 2026-04-17 06:59:21.731407 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:59:21.731494 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:59:21.731515 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:59:21.731559 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:59:21.731580 | orchestrator | 2026-04-17 06:59:21.731601 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-17 06:59:21.731622 | orchestrator | Friday 17 April 2026 06:59:20 +0000 (0:00:04.761) 0:04:43.341 ********** 2026-04-17 06:59:21.731652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:59:21.731672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:59:21.731705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:59:21.731726 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:59:21.731759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 06:59:34.240775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 06:59:34.240923 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:59:34.240981 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:59:34.240995 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 06:59:34.241007 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:59:34.241039 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2026-04-17 06:59:34.241059 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 06:59:34.241072 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:59:34.241091 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:59:34.241102 
| orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 06:59:34.241114 | orchestrator | 2026-04-17 06:59:34.241127 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-17 06:59:34.241139 | orchestrator | Friday 17 April 2026 06:59:28 +0000 (0:00:08.048) 0:04:51.390 ********** 2026-04-17 06:59:34.241151 | orchestrator | skipping: [testbed-node-4] 2026-04-17 06:59:34.241163 | orchestrator | skipping: [testbed-node-3] 2026-04-17 06:59:34.241173 | orchestrator | skipping: [testbed-node-5] 2026-04-17 06:59:34.241184 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:59:34.241195 | orchestrator | skipping: [testbed-node-2] 2026-04-17 06:59:34.241205 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:59:34.241216 | orchestrator | 2026-04-17 06:59:34.241227 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-17 06:59:34.241238 | orchestrator | Friday 17 April 2026 06:59:31 +0000 (0:00:03.151) 0:04:54.542 ********** 2026-04-17 06:59:34.241249 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-17 06:59:34.241263 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-17 06:59:34.241275 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  
2026-04-17 06:59:34.241288 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-17 06:59:34.241300 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-17 06:59:34.241313 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-17 06:59:34.241326 | orchestrator | skipping: [testbed-node-0] 2026-04-17 06:59:34.241338 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-17 06:59:34.241350 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-17 06:59:34.241363 | orchestrator | skipping: [testbed-node-1] 2026-04-17 06:59:34.241383 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-17 07:00:04.200116 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:00:04.200257 | orchestrator | ok: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-17 07:00:04.200293 | orchestrator | ok: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-17 07:00:04.200326 | orchestrator | ok: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-17 07:00:04.200352 | orchestrator | 2026-04-17 07:00:04.201199 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-17 07:00:04.201229 | orchestrator | Friday 17 April 2026 06:59:36 +0000 (0:00:05.166) 0:04:59.708 ********** 2026-04-17 07:00:04.201241 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:00:04.201252 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:00:04.201263 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:00:04.201274 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:00:04.201284 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 07:00:04.201295 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:00:04.201306 | orchestrator | 2026-04-17 07:00:04.201317 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-17 07:00:04.201328 | orchestrator | Friday 17 April 2026 06:59:38 +0000 (0:00:01.957) 0:05:01.666 ********** 2026-04-17 07:00:04.201339 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-17 07:00:04.201351 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-17 07:00:04.201361 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-17 07:00:04.201372 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-17 07:00:04.201384 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-17 07:00:04.201395 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-17 07:00:04.201406 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-17 07:00:04.201464 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-17 07:00:04.201477 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-17 07:00:04.201488 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-17 07:00:04.201499 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:00:04.201510 | 
orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-17 07:00:04.201520 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:00:04.201531 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-17 07:00:04.201542 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-17 07:00:04.201552 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-17 07:00:04.201563 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:00:04.201574 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-17 07:00:04.201585 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-17 07:00:04.201595 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-17 07:00:04.201606 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-17 07:00:04.201617 | orchestrator | 2026-04-17 07:00:04.201628 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-17 07:00:04.201639 | orchestrator | Friday 17 April 2026 06:59:45 +0000 (0:00:06.206) 0:05:07.872 ********** 2026-04-17 07:00:04.201666 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 07:00:04.201680 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 07:00:04.201699 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 
07:00:04.201717 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 07:00:04.201737 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 07:00:04.201756 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-17 07:00:04.201775 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-17 07:00:04.201796 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-17 07:00:04.201816 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 07:00:04.201861 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 07:00:04.201883 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 07:00:04.201914 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 07:00:04.201934 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 07:00:04.201954 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-17 07:00:04.201975 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:00:04.201994 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 07:00:04.202006 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 07:00:04.202072 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-17 07:00:04.202084 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:00:04.202095 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-17 07:00:04.202106 | orchestrator | 
skipping: [testbed-node-2] 2026-04-17 07:00:04.202116 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 07:00:04.202127 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 07:00:04.202138 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 07:00:04.202148 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 07:00:04.202159 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 07:00:04.202169 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 07:00:04.202180 | orchestrator | 2026-04-17 07:00:04.202191 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-17 07:00:04.202202 | orchestrator | Friday 17 April 2026 06:59:53 +0000 (0:00:08.470) 0:05:16.342 ********** 2026-04-17 07:00:04.202212 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:00:04.202223 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:00:04.202234 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:00:04.202245 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:00:04.202255 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:00:04.202266 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:00:04.202277 | orchestrator | 2026-04-17 07:00:04.202288 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-17 07:00:04.202299 | orchestrator | Friday 17 April 2026 06:59:55 +0000 (0:00:01.803) 0:05:18.146 ********** 2026-04-17 07:00:04.202309 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:00:04.202320 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:00:04.202341 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:00:04.202352 | 
orchestrator | skipping: [testbed-node-0] 2026-04-17 07:00:04.202363 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:00:04.202373 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:00:04.202384 | orchestrator | 2026-04-17 07:00:04.202394 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-17 07:00:04.202405 | orchestrator | Friday 17 April 2026 06:59:57 +0000 (0:00:02.045) 0:05:20.192 ********** 2026-04-17 07:00:04.202437 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:00:04.202448 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:00:04.202458 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:00:04.202470 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:00:04.202481 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:00:04.202492 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:00:04.202503 | orchestrator | 2026-04-17 07:00:04.202513 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-04-17 07:00:04.202524 | orchestrator | Friday 17 April 2026 07:00:00 +0000 (0:00:02.903) 0:05:23.095 ********** 2026-04-17 07:00:04.202535 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:00:04.202546 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:00:04.202557 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:00:04.202574 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:00:04.202594 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:00:04.202613 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:00:04.202632 | orchestrator | 2026-04-17 07:00:04.202651 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-17 07:00:04.202670 | orchestrator | Friday 17 April 2026 07:00:03 +0000 (0:00:03.237) 0:05:26.332 ********** 2026-04-17 07:00:04.202695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 07:00:04.202746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 07:00:05.429710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 07:00:05.429840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 07:00:05.429858 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:00:05.429873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 07:00:05.429886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 07:00:05.429898 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:00:05.429924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 07:00:05.429955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 07:00:05.429978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 07:00:05.429990 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:00:05.430002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 
07:00:05.430014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 07:00:05.430091 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:00:05.430104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 07:00:05.430121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 07:00:05.430133 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:00:05.430152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 07:00:11.515559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 07:00:11.515663 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:00:11.515691 | orchestrator | 2026-04-17 07:00:11.515712 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-17 07:00:11.515734 | orchestrator | Friday 17 April 2026 07:00:06 +0000 (0:00:03.079) 0:05:29.412 ********** 2026-04-17 07:00:11.515755 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-17 07:00:11.515775 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  
2026-04-17 07:00:11.515789 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:00:11.515800 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-17 07:00:11.515811 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-17 07:00:11.515822 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:00:11.515833 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-17 07:00:11.515844 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-17 07:00:11.515855 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:00:11.515866 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-17 07:00:11.515877 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-17 07:00:11.515887 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:00:11.515899 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-17 07:00:11.515910 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-17 07:00:11.515920 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:00:11.515931 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-17 07:00:11.515942 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-17 07:00:11.515953 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:00:11.515964 | orchestrator | 2026-04-17 07:00:11.515975 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-04-17 07:00:11.515985 | orchestrator | Friday 17 April 2026 07:00:08 +0000 (0:00:02.166) 0:05:31.578 ********** 2026-04-17 07:00:11.515998 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 
'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 07:00:11.516026 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 07:00:11.516078 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 07:00:11.516094 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 07:00:11.516108 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 07:00:11.516122 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 07:00:11.516135 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 07:00:11.516161 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 07:00:11.516184 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 07:00:16.926939 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 07:00:16.927049 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 07:00:16.927066 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 07:00:16.927080 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:00:16.927131 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:00:16.927161 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:00:16.927173 | orchestrator | 2026-04-17 07:00:16.927186 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-04-17 07:00:16.927198 | orchestrator | Friday 17 April 2026 07:00:13 +0000 (0:00:05.084) 0:05:36.663 ********** 2026-04-17 07:00:16.927210 | orchestrator | ok: [testbed-node-3] => { 2026-04-17 07:00:16.927222 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:00:16.927233 | orchestrator | } 2026-04-17 07:00:16.927244 | orchestrator | ok: [testbed-node-4] => { 2026-04-17 07:00:16.927255 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:00:16.927266 | orchestrator | } 2026-04-17 07:00:16.927276 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 07:00:16.927287 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:00:16.927297 | orchestrator | } 2026-04-17 07:00:16.927307 | orchestrator | ok: [testbed-node-0] => { 2026-04-17 07:00:16.927318 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:00:16.927328 | orchestrator | } 
2026-04-17 07:00:16.927339 | orchestrator | ok: [testbed-node-1] => { 2026-04-17 07:00:16.927349 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:00:16.927360 | orchestrator | } 2026-04-17 07:00:16.927370 | orchestrator | ok: [testbed-node-2] => { 2026-04-17 07:00:16.927380 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:00:16.927391 | orchestrator | } 2026-04-17 07:00:16.927402 | orchestrator | 2026-04-17 07:00:16.927444 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:00:16.927455 | orchestrator | Friday 17 April 2026 07:00:16 +0000 (0:00:02.203) 0:05:38.866 ********** 2026-04-17 07:00:16.927468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 07:00:16.927492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 07:00:16.927512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 07:00:16.927527 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:00:16.927548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 07:00:21.394102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 07:00:21.394214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 07:00:21.394259 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:00:21.394276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 07:00:21.394302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 07:00:21.394314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 07:00:21.394326 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:00:21.394358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 07:00:21.394372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 07:00:21.394384 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:00:21.394397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 07:00:21.394479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 07:00:21.394492 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:00:21.394509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 07:00:21.394521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:00:21.394532 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:00:21.394543 | orchestrator |
2026-04-17 07:00:21.394555 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 07:00:21.394567 | orchestrator | Friday 17 April 2026 07:00:19 +0000 (0:00:03.854) 0:05:42.720 **********
2026-04-17 07:00:21.394582 | orchestrator |
2026-04-17 07:00:21.394594 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 07:00:21.394606 | orchestrator | Friday 17 April 2026 07:00:20 +0000 (0:00:00.597) 0:05:43.318 **********
2026-04-17 07:00:21.394619 | orchestrator |
2026-04-17 07:00:21.394631 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 07:00:21.394643 | orchestrator | Friday 17 April 2026 07:00:20 +0000 (0:00:00.533) 0:05:43.851 **********
2026-04-17 07:00:21.394656 | orchestrator |
2026-04-17 07:00:21.394676 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 07:01:54.823213 | orchestrator | Friday 17 April 2026 07:00:21 +0000 (0:00:00.737) 0:05:44.589 **********
2026-04-17 07:01:54.823354 | orchestrator |
2026-04-17 07:01:54.823381 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 07:01:54.823432 | orchestrator | Friday 17 April 2026 07:00:22 +0000 (0:00:00.511) 0:05:45.100 **********
2026-04-17 07:01:54.823451 | orchestrator |
2026-04-17 07:01:54.823469 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-17 07:01:54.823487 | orchestrator | Friday 17 April 2026 07:00:22 +0000 (0:00:00.522) 0:05:45.623 **********
2026-04-17 07:01:54.823531 | orchestrator |
2026-04-17 07:01:54.823544 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-04-17 07:01:54.823555 | orchestrator |
2026-04-17 07:01:54.823566 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-17 07:01:54.823577 | orchestrator | Friday 17 April 2026 07:00:24 +0000 (0:00:01.978) 0:05:47.602 **********
2026-04-17 07:01:54.823588 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:01:54.823601 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:01:54.823611 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:01:54.823622 | orchestrator |
2026-04-17 07:01:54.823633 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-17 07:01:54.823643 | orchestrator |
2026-04-17 07:01:54.823654 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-17 07:01:54.823665 | orchestrator | Friday 17 April 2026 07:00:26 +0000 (0:00:01.667) 0:05:49.269 **********
2026-04-17 07:01:54.823676 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:01:54.823686 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:01:54.823697 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:01:54.823710 | orchestrator |
2026-04-17 07:01:54.823723 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-17 07:01:54.823736 | orchestrator |
2026-04-17 07:01:54.823748 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-17 07:01:54.823760 | orchestrator | Friday 17 April 2026 07:00:29 +0000 (0:00:02.719) 0:05:51.989 **********
2026-04-17 07:01:54.823773 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-17 07:01:54.823785 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-17 07:01:54.823799 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-17 07:01:54.823813 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-17 07:01:54.823826 | orchestrator | changed: [testbed-node-0] => (item=nova-conductor)
2026-04-17 07:01:54.823838 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-17 07:01:54.823850 | orchestrator | changed: [testbed-node-2] => (item=nova-conductor)
2026-04-17 07:01:54.823863 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-17 07:01:54.823875 | orchestrator | changed: [testbed-node-1] => (item=nova-conductor)
2026-04-17 07:01:54.823888 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-17 07:01:54.823901 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-17 07:01:54.823914 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-17 07:01:54.823926 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-17 07:01:54.823937 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-17 07:01:54.823947 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-17 07:01:54.823958 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-17 07:01:54.823968 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-17 07:01:54.823979 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-17 07:01:54.823990 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-17 07:01:54.824000 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-17 07:01:54.824011 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-17 07:01:54.824021 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-17 07:01:54.824047 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-17 07:01:54.824058 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-17 07:01:54.824068 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-17 07:01:54.824147 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-17 07:01:54.824157 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-17 07:01:54.824176 | orchestrator | changed: [testbed-node-2] => (item=nova-novncproxy)
2026-04-17 07:01:54.824186 | orchestrator | changed: [testbed-node-0] => (item=nova-novncproxy)
2026-04-17 07:01:54.824195 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-17 07:01:54.824204 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-17 07:01:54.824214 | orchestrator | changed: [testbed-node-1] => (item=nova-novncproxy)
2026-04-17 07:01:54.824223 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-17 07:01:54.824233 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-17 07:01:54.824242 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-17 07:01:54.824251 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-17 07:01:54.824261 | orchestrator |
2026-04-17 07:01:54.824271 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-17 07:01:54.824308 | orchestrator |
2026-04-17 07:01:54.824319 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-17 07:01:54.824329 | orchestrator | Friday 17 April 2026 07:01:02 +0000 (0:00:33.110) 0:06:25.100 **********
2026-04-17 07:01:54.824338 | orchestrator | changed: [testbed-node-0] => (item=nova-scheduler)
2026-04-17 07:01:54.824370 | orchestrator | changed: [testbed-node-1] => (item=nova-scheduler)
2026-04-17 07:01:54.824380 | orchestrator | changed: [testbed-node-2] => (item=nova-scheduler)
2026-04-17 07:01:54.824422 | orchestrator | changed: [testbed-node-2] => (item=nova-api)
2026-04-17 07:01:54.824432 | orchestrator | changed: [testbed-node-0] => (item=nova-api)
2026-04-17 07:01:54.824442 | orchestrator | changed: [testbed-node-1] => (item=nova-api)
2026-04-17 07:01:54.824451 | orchestrator |
2026-04-17 07:01:54.824461 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-17 07:01:54.824470 | orchestrator |
2026-04-17 07:01:54.824480 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-17 07:01:54.824489 | orchestrator | Friday 17 April 2026 07:01:22 +0000 (0:00:20.059) 0:06:45.160 **********
2026-04-17 07:01:54.824499 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:01:54.824509 | orchestrator |
2026-04-17 07:01:54.824518 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-17 07:01:54.824527 | orchestrator |
2026-04-17 07:01:54.824537 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-17 07:01:54.824547 | orchestrator | Friday 17 April 2026 07:01:40 +0000 (0:00:17.696) 0:07:02.857 **********
2026-04-17 07:01:54.824556 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:01:54.824566 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:01:54.824575 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:01:54.824585 | orchestrator |
2026-04-17 07:01:54.824594 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:01:54.824604 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 07:01:54.824617 | orchestrator | testbed-node-0 :
ok=39  changed=8  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-17 07:01:54.824626 | orchestrator | testbed-node-1 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-17 07:01:54.824636 | orchestrator | testbed-node-2 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-17 07:01:54.824645 | orchestrator | testbed-node-3 : ok=43  changed=5  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-17 07:01:54.824655 | orchestrator | testbed-node-4 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-17 07:01:54.824672 | orchestrator | testbed-node-5 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-17 07:01:54.824682 | orchestrator | 2026-04-17 07:01:54.824691 | orchestrator | 2026-04-17 07:01:54.824701 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 07:01:54.824711 | orchestrator | Friday 17 April 2026 07:01:54 +0000 (0:00:14.294) 0:07:17.151 ********** 2026-04-17 07:01:54.824720 | orchestrator | =============================================================================== 2026-04-17 07:01:54.824730 | orchestrator | nova-cell : Reload nova cell services to remove RPC version cap -------- 33.11s 2026-04-17 07:01:54.824739 | orchestrator | nova : Reload nova API services to remove RPC version pin -------------- 20.06s 2026-04-17 07:01:54.824748 | orchestrator | nova : Run Nova upgrade checks ----------------------------------------- 19.24s 2026-04-17 07:01:54.824758 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.05s 2026-04-17 07:01:54.824768 | orchestrator | nova : Run Nova API online database migrations ------------------------- 17.70s 2026-04-17 07:01:54.824777 | orchestrator | nova-cell : Run Nova cell online database migrations ------------------- 14.29s 2026-04-17 07:01:54.824793 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 12.97s 2026-04-17 07:01:54.824802 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.11s 2026-04-17 07:01:54.824812 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.47s 2026-04-17 07:01:54.824822 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.05s 2026-04-17 07:01:54.824831 | orchestrator | nova-cell : Copying over libvirt SASL configuration --------------------- 6.21s 2026-04-17 07:01:54.824841 | orchestrator | nova-cell : Get container facts ----------------------------------------- 5.40s 2026-04-17 07:01:54.824850 | orchestrator | nova-cell : Get current Libvirt version --------------------------------- 5.39s 2026-04-17 07:01:54.824860 | orchestrator | nova-cell : Copying over libvirt configuration -------------------------- 5.17s 2026-04-17 07:01:54.824869 | orchestrator | service-check-containers : nova_cell | Check containers ----------------- 5.08s 2026-04-17 07:01:54.824879 | orchestrator | service-check-containers : nova | Check containers ---------------------- 4.98s 2026-04-17 07:01:54.824888 | orchestrator | nova-cell : Copy over ceph.conf ----------------------------------------- 4.93s 2026-04-17 07:01:54.824898 | orchestrator | nova-cell : Flush handlers ---------------------------------------------- 4.88s 2026-04-17 07:01:54.824907 | orchestrator | service-cert-copy : nova | Copying over extra CA certificates ----------- 4.76s 2026-04-17 07:01:54.824917 | orchestrator | nova-cell : Copying over config.json files for services ----------------- 4.76s 2026-04-17 07:01:55.037377 | orchestrator | + osism apply -a upgrade horizon 2026-04-17 07:01:56.479327 | orchestrator | 2026-04-17 07:01:56 | INFO  | Prepare task for execution of horizon. 
2026-04-17 07:01:56.556705 | orchestrator | 2026-04-17 07:01:56 | INFO  | Task ec998a0b-f1b2-481c-a949-92555492dc22 (horizon) was prepared for execution.
2026-04-17 07:01:56.556785 | orchestrator | 2026-04-17 07:01:56 | INFO  | It takes a moment until task ec998a0b-f1b2-481c-a949-92555492dc22 (horizon) has been started and output is visible here.
2026-04-17 07:02:10.849643 | orchestrator |
2026-04-17 07:02:10.849763 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 07:02:10.849781 | orchestrator |
2026-04-17 07:02:10.849794 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 07:02:10.849806 | orchestrator | Friday 17 April 2026 07:02:01 +0000 (0:00:01.886) 0:00:01.886 **********
2026-04-17 07:02:10.849817 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:02:10.849830 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:02:10.849841 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:02:10.849852 | orchestrator |
2026-04-17 07:02:10.849864 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 07:02:10.849897 | orchestrator | Friday 17 April 2026 07:02:03 +0000 (0:00:01.867) 0:00:03.754 **********
2026-04-17 07:02:10.849909 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-17 07:02:10.849920 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-17 07:02:10.849931 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-17 07:02:10.849942 | orchestrator |
2026-04-17 07:02:10.849953 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-17 07:02:10.849965 | orchestrator |
2026-04-17 07:02:10.849976 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-17 07:02:10.849987 | orchestrator | Friday 17 April 2026 07:02:05 +0000
(0:00:01.474) 0:00:05.229 ********** 2026-04-17 07:02:10.849999 | orchestrator | included: /ansible/roles/horizon/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:02:10.850011 | orchestrator | 2026-04-17 07:02:10.850086 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-17 07:02:10.850097 | orchestrator | Friday 17 April 2026 07:02:07 +0000 (0:00:01.947) 0:00:07.176 ********** 2026-04-17 07:02:10.850143 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 07:02:10.850218 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 07:02:10.850253 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 07:02:10.850269 | orchestrator | 2026-04-17 07:02:10.850282 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-17 07:02:10.850295 | orchestrator | Friday 17 April 2026 07:02:10 +0000 (0:00:03.352) 0:00:10.529 ********** 2026-04-17 07:02:10.850315 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:02:10.850327 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:02:10.850340 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:02:10.850353 | orchestrator | 2026-04-17 07:02:10.850372 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-17 07:02:38.476960 | orchestrator | Friday 17 April 
2026 07:02:12 +0000 (0:00:01.689) 0:00:12.219 **********
2026-04-17 07:02:38.477066 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-17 07:02:38.477081 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-17 07:02:38.477091 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-04-17 07:02:38.477101 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-04-17 07:02:38.477110 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-04-17 07:02:38.477119 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-04-17 07:02:38.477128 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-04-17 07:02:38.477136 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-04-17 07:02:38.477145 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-17 07:02:38.477154 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-17 07:02:38.477163 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-04-17 07:02:38.477172 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-04-17 07:02:38.477180 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-04-17 07:02:38.477189 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-04-17 07:02:38.477198 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-04-17 07:02:38.477207 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-04-17 07:02:38.477215 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-17 07:02:38.477224 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-17 07:02:38.477233 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-04-17 07:02:38.477242 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-04-17 07:02:38.477250 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-04-17 07:02:38.477259 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-04-17 07:02:38.477268 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-04-17 07:02:38.477276 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-04-17 07:02:38.477287 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-17 07:02:38.477297 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-17 07:02:38.477322 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-17 07:02:38.477332 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-17 07:02:38.477340 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-17 07:02:38.477366 | orchestrator | included:
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-17 07:02:38.477437 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-17 07:02:38.477447 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-17 07:02:38.477456 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-17 07:02:38.477466 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-17 07:02:38.477474 | orchestrator |
2026-04-17 07:02:38.477484 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 07:02:38.477493 | orchestrator | Friday 17 April 2026 07:02:14 +0000 (0:00:02.194) 0:00:14.413 **********
2026-04-17 07:02:38.477502 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:02:38.477513 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:02:38.477523 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:02:38.477533 | orchestrator |
2026-04-17 07:02:38.477559 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 07:02:38.477570 | orchestrator | Friday 17 April 2026 07:02:16 +0000 (0:00:01.652) 0:00:16.066 **********
2026-04-17 07:02:38.477581 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.477592 | orchestrator |
2026-04-17 07:02:38.477602 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 07:02:38.477612 | orchestrator | Friday 17 April 2026 07:02:17 +0000 (0:00:01.205) 0:00:17.271 **********
2026-04-17 07:02:38.477621 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.477631 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:02:38.477641 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:02:38.477651 | orchestrator |
2026-04-17 07:02:38.477661 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 07:02:38.477671 | orchestrator | Friday 17 April 2026 07:02:18 +0000 (0:00:01.489) 0:00:18.760 **********
2026-04-17 07:02:38.477681 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:02:38.477691 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:02:38.477700 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:02:38.477710 | orchestrator |
2026-04-17 07:02:38.477720 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 07:02:38.477730 | orchestrator | Friday 17 April 2026 07:02:20 +0000 (0:00:01.620) 0:00:20.381 **********
2026-04-17 07:02:38.477740 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.477749 | orchestrator |
2026-04-17 07:02:38.477758 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 07:02:38.477768 | orchestrator | Friday 17 April 2026 07:02:21 +0000 (0:00:01.121) 0:00:21.502 **********
2026-04-17 07:02:38.477778 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.477787 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:02:38.477797 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:02:38.477807 | orchestrator |
2026-04-17 07:02:38.477816 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 07:02:38.477826 | orchestrator | Friday 17 April 2026 07:02:22 +0000 (0:00:01.441) 0:00:22.943 **********
2026-04-17 07:02:38.477836 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:02:38.477846 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:02:38.477856 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:02:38.477866 | orchestrator |
2026-04-17 07:02:38.477875 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 07:02:38.477884 | orchestrator | Friday 17 April 2026 07:02:24 +0000 (0:00:01.497) 0:00:24.440 **********
2026-04-17 07:02:38.477900 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.477909 | orchestrator |
2026-04-17 07:02:38.477917 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 07:02:38.477926 | orchestrator | Friday 17 April 2026 07:02:25 +0000 (0:00:01.168) 0:00:25.609 **********
2026-04-17 07:02:38.477934 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.477943 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:02:38.477951 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:02:38.477960 | orchestrator |
2026-04-17 07:02:38.477969 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 07:02:38.477977 | orchestrator | Friday 17 April 2026 07:02:27 +0000 (0:00:01.657) 0:00:27.267 **********
2026-04-17 07:02:38.477986 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:02:38.477995 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:02:38.478003 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:02:38.478012 | orchestrator |
2026-04-17 07:02:38.478083 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 07:02:38.478093 | orchestrator | Friday 17 April 2026 07:02:28 +0000 (0:00:01.415) 0:00:28.683 **********
2026-04-17 07:02:38.478101 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.478110 | orchestrator |
2026-04-17 07:02:38.478119 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 07:02:38.478134 | orchestrator | Friday 17 April 2026 07:02:29 +0000 (0:00:01.176) 0:00:29.859 **********
2026-04-17 07:02:38.478143 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.478151 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:02:38.478160 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:02:38.478169 | orchestrator |
2026-04-17 07:02:38.478177 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 07:02:38.478186 | orchestrator | Friday 17 April 2026 07:02:31 +0000 (0:00:01.493) 0:00:31.353 **********
2026-04-17 07:02:38.478195 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:02:38.478203 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:02:38.478212 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:02:38.478221 | orchestrator |
2026-04-17 07:02:38.478229 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 07:02:38.478238 | orchestrator | Friday 17 April 2026 07:02:32 +0000 (0:00:01.663) 0:00:33.016 **********
2026-04-17 07:02:38.478246 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.478255 | orchestrator |
2026-04-17 07:02:38.478264 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 07:02:38.478272 | orchestrator | Friday 17 April 2026 07:02:34 +0000 (0:00:01.160) 0:00:34.177 **********
2026-04-17 07:02:38.478281 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.478289 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:02:38.478298 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:02:38.478307 | orchestrator |
2026-04-17 07:02:38.478315 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 07:02:38.478324 | orchestrator | Friday 17 April 2026 07:02:35 +0000 (0:00:01.423) 0:00:35.601 **********
2026-04-17 07:02:38.478332 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:02:38.478341 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:02:38.478350 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:02:38.478359 | orchestrator |
2026-04-17 07:02:38.478367 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 07:02:38.478399 | orchestrator | Friday 17 April 2026 07:02:36 +0000 (0:00:01.394) 0:00:36.996 **********
2026-04-17 07:02:38.478409 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.478417 | orchestrator |
2026-04-17 07:02:38.478426 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 07:02:38.478435 | orchestrator | Friday 17 April 2026 07:02:38 +0000 (0:00:01.186) 0:00:38.183 **********
2026-04-17 07:02:38.478443 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:02:38.478452 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:02:38.478468 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:03:14.111108 | orchestrator |
2026-04-17 07:03:14.111227 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 07:03:14.111246 | orchestrator | Friday 17 April 2026 07:02:39 +0000 (0:00:01.442) 0:00:39.625 **********
2026-04-17 07:03:14.111259 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:03:14.111271 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:03:14.111282 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:03:14.111294 | orchestrator |
2026-04-17 07:03:14.111305 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 07:03:14.111317 | orchestrator | Friday 17 April 2026 07:02:41 +0000 (0:00:01.434) 0:00:41.060 **********
2026-04-17 07:03:14.111328 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:03:14.111340 | orchestrator |
2026-04-17 07:03:14.111351 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 07:03:14.111411 | orchestrator | Friday 17 April 2026 07:02:42 +0000 (0:00:01.135) 0:00:42.195 **********
2026-04-17 07:03:14.111424 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:03:14.111435 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:03:14.111446 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:03:14.111457 | orchestrator |
2026-04-17 07:03:14.111468 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 07:03:14.111479 | orchestrator | Friday 17 April 2026 07:02:43 +0000 (0:00:01.396) 0:00:43.591 **********
2026-04-17 07:03:14.111489 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:03:14.111500 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:03:14.111511 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:03:14.111521 | orchestrator |
2026-04-17 07:03:14.111532 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 07:03:14.111543 | orchestrator | Friday 17 April 2026 07:02:45 +0000 (0:00:01.598) 0:00:45.190 **********
2026-04-17 07:03:14.111554 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:03:14.111564 | orchestrator |
2026-04-17 07:03:14.111576 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 07:03:14.111587 | orchestrator | Friday 17 April 2026 07:02:46 +0000 (0:00:01.108) 0:00:46.299 **********
2026-04-17 07:03:14.111598 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:03:14.111609 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:03:14.111620 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:03:14.111630 | orchestrator |
2026-04-17 07:03:14.111641 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 07:03:14.111654 | orchestrator | Friday 17 April 2026 07:02:47 +0000 (0:00:01.548) 0:00:47.847 **********
2026-04-17 07:03:14.111667 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:03:14.111680 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:03:14.111692 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:03:14.111704 | orchestrator |
2026-04-17 07:03:14.111717 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 07:03:14.111730 | orchestrator | Friday 17 April 2026 07:02:49 +0000 (0:00:01.439) 0:00:49.287 **********
2026-04-17 07:03:14.111742 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:03:14.111754 | orchestrator |
2026-04-17 07:03:14.111767 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 07:03:14.111780 | orchestrator | Friday 17 April 2026 07:02:50 +0000 (0:00:01.169) 0:00:50.457 **********
2026-04-17 07:03:14.111792 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:03:14.111804 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:03:14.111817 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:03:14.111830 | orchestrator |
2026-04-17 07:03:14.111843 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 07:03:14.111855 | orchestrator | Friday 17 April 2026 07:02:51 +0000 (0:00:01.479) 0:00:51.936 **********
2026-04-17 07:03:14.111868 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:03:14.111880 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:03:14.111909 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:03:14.111942 | orchestrator |
2026-04-17 07:03:14.111953 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 07:03:14.111964 | orchestrator | Friday 17 April 2026 07:02:53 +0000 (0:00:01.372) 0:00:53.309 **********
2026-04-17 07:03:14.111975 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:03:14.111985 | orchestrator |
2026-04-17 07:03:14.111996 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 07:03:14.112007 | orchestrator | Friday 17 April 2026 07:02:54 +0000 (0:00:01.104) 0:00:54.414 **********
2026-04-17 07:03:14.112017 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:03:14.112028 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:03:14.112039 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:03:14.112050 | orchestrator |
2026-04-17 07:03:14.112060 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-17 07:03:14.112071 | orchestrator | Friday 17 April 2026 07:02:55 +0000 (0:00:01.367) 0:00:55.781 **********
2026-04-17 07:03:14.112082 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:03:14.112092 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:03:14.112103 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:03:14.112114 | orchestrator |
2026-04-17 07:03:14.112124 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-17 07:03:14.112135 | orchestrator | Friday 17 April 2026 07:02:59 +0000 (0:00:03.389) 0:00:59.170 **********
2026-04-17 07:03:14.112146 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-17 07:03:14.112157 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-17 07:03:14.112168 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-17 07:03:14.112178 | orchestrator |
2026-04-17 07:03:14.112189 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-17 07:03:14.112199 | orchestrator | Friday 17 April 2026 07:03:02 +0000 (0:00:03.052) 0:01:02.223 **********
2026-04-17 07:03:14.112210 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-17 07:03:14.112222 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-17 07:03:14.112249 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-17 07:03:14.112260 | orchestrator |
2026-04-17 07:03:14.112271 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-17 07:03:14.112282 | orchestrator | Friday 17 April 2026 07:03:05 +0000 (0:00:03.030) 0:01:05.253 **********
2026-04-17 07:03:14.112293 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-17 07:03:14.112304 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-17 07:03:14.112314 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-17 07:03:14.112325 | orchestrator |
2026-04-17 07:03:14.112336 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-17 07:03:14.112346 | orchestrator | Friday 17 April 2026 07:03:07 +0000 (0:00:02.520) 0:01:07.774 **********
2026-04-17 07:03:14.112357 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:03:14.112415 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:03:14.112427 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:03:14.112438 | orchestrator |
2026-04-17 07:03:14.112448 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-17 07:03:14.112460 | orchestrator | Friday 17 April 2026 07:03:09 +0000 (0:00:01.353) 0:01:09.127 **********
2026-04-17 07:03:14.112470 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:03:14.112481 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:03:14.112492 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:03:14.112503 | orchestrator |
2026-04-17 07:03:14.112513 | orchestrator | TASK
[horizon : include_tasks] ************************************************* 2026-04-17 07:03:14.112534 | orchestrator | Friday 17 April 2026 07:03:10 +0000 (0:00:01.602) 0:01:10.730 ********** 2026-04-17 07:03:14.112544 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:03:14.112555 | orchestrator | 2026-04-17 07:03:14.112566 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-17 07:03:14.112577 | orchestrator | Friday 17 April 2026 07:03:12 +0000 (0:00:01.803) 0:01:12.534 ********** 2026-04-17 07:03:14.112600 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 07:03:14.112630 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 07:03:15.920609 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 07:03:15.920694 | orchestrator | 2026-04-17 07:03:15.920707 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-17 07:03:15.920717 | orchestrator | Friday 17 April 2026 07:03:15 +0000 (0:00:02.833) 0:01:15.367 ********** 2026-04-17 07:03:15.920741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 07:03:15.920775 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:03:15.920784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 07:03:15.920793 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:03:15.920817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 07:03:20.869480 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:03:20.869590 | orchestrator | 2026-04-17 07:03:20.869607 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-17 07:03:20.869621 | orchestrator | Friday 17 April 2026 07:03:17 +0000 (0:00:01.691) 0:01:17.059 ********** 2026-04-17 07:03:20.869637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 07:03:20.869677 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:03:20.869727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 07:03:20.869743 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:03:20.869755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 07:03:20.869775 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:03:20.869787 | orchestrator | 2026-04-17 07:03:20.869798 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-17 07:03:20.869809 | orchestrator | Friday 17 April 2026 07:03:19 +0000 (0:00:02.274) 0:01:19.334 ********** 2026-04-17 07:03:20.869837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 07:03:24.122956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 07:03:24.123124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 07:03:24.123153 | orchestrator | 2026-04-17 07:03:24.123167 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-17 07:03:24.123180 | orchestrator | Friday 17 April 2026 07:03:22 +0000 (0:00:02.936) 0:01:22.270 ********** 2026-04-17 07:03:24.123192 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 07:03:24.123203 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:03:24.123214 | orchestrator | } 2026-04-17 07:03:24.123225 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 07:03:24.123236 | orchestrator |  "msg": "Notifying handlers" 
2026-04-17 07:03:24.123264 | orchestrator | } 2026-04-17 07:03:24.123276 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 07:03:24.123286 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:03:24.123297 | orchestrator | } 2026-04-17 07:03:24.123308 | orchestrator | 2026-04-17 07:03:24.123320 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:03:24.123331 | orchestrator | Friday 17 April 2026 07:03:23 +0000 (0:00:01.392) 0:01:23.663 ********** 2026-04-17 07:03:24.123349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 07:03:24.123436 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:03:24.123463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 07:04:40.915095 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:04:40.915198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 07:04:40.915226 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:04:40.915232 | orchestrator | 2026-04-17 07:04:40.915239 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-17 07:04:40.915245 | orchestrator | Friday 17 April 2026 07:03:25 +0000 (0:00:02.369) 0:01:26.033 ********** 2026-04-17 07:04:40.915251 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:04:40.915256 | orchestrator | skipping: [testbed-node-1] 
2026-04-17 07:04:40.915261 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:04:40.915267 | orchestrator | 2026-04-17 07:04:40.915272 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-17 07:04:40.915278 | orchestrator | Friday 17 April 2026 07:03:27 +0000 (0:00:01.472) 0:01:27.505 ********** 2026-04-17 07:04:40.915284 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:04:40.915290 | orchestrator | 2026-04-17 07:04:40.915295 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-17 07:04:40.915300 | orchestrator | Friday 17 April 2026 07:03:29 +0000 (0:00:01.725) 0:01:29.230 ********** 2026-04-17 07:04:40.915306 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:04:40.915311 | orchestrator | 2026-04-17 07:04:40.915316 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-17 07:04:40.915322 | orchestrator | Friday 17 April 2026 07:04:03 +0000 (0:00:34.296) 0:02:03.527 ********** 2026-04-17 07:04:40.915371 | orchestrator | 2026-04-17 07:04:40.915377 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-17 07:04:40.915383 | orchestrator | Friday 17 April 2026 07:04:04 +0000 (0:00:00.659) 0:02:04.186 ********** 2026-04-17 07:04:40.915388 | orchestrator | 2026-04-17 07:04:40.915393 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-17 07:04:40.915399 | orchestrator | Friday 17 April 2026 07:04:04 +0000 (0:00:00.478) 0:02:04.664 ********** 2026-04-17 07:04:40.915404 | orchestrator | 2026-04-17 07:04:40.915409 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-17 07:04:40.915418 | orchestrator | Friday 17 April 2026 07:04:05 +0000 (0:00:00.789) 
0:02:05.454 ********** 2026-04-17 07:04:40.915428 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:04:40.915437 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:04:40.915445 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:04:40.915458 | orchestrator | 2026-04-17 07:04:40.915471 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 07:04:40.915481 | orchestrator | testbed-node-0 : ok=36  changed=6  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-04-17 07:04:40.915506 | orchestrator | testbed-node-1 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-17 07:04:40.915515 | orchestrator | testbed-node-2 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-17 07:04:40.915523 | orchestrator | 2026-04-17 07:04:40.915532 | orchestrator | 2026-04-17 07:04:40.915540 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 07:04:40.915549 | orchestrator | Friday 17 April 2026 07:04:40 +0000 (0:00:35.042) 0:02:40.497 ********** 2026-04-17 07:04:40.915559 | orchestrator | =============================================================================== 2026-04-17 07:04:40.915567 | orchestrator | horizon : Restart horizon container ------------------------------------ 35.04s 2026-04-17 07:04:40.915576 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 34.30s 2026-04-17 07:04:40.915584 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.39s 2026-04-17 07:04:40.915593 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 3.35s 2026-04-17 07:04:40.915601 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.05s 2026-04-17 07:04:40.915626 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 
3.03s 2026-04-17 07:04:40.915636 | orchestrator | service-check-containers : horizon | Check containers ------------------- 2.94s 2026-04-17 07:04:40.915645 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.83s 2026-04-17 07:04:40.915653 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.52s 2026-04-17 07:04:40.915663 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.37s 2026-04-17 07:04:40.915671 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 2.28s 2026-04-17 07:04:40.915678 | orchestrator | horizon : include_tasks ------------------------------------------------- 2.19s 2026-04-17 07:04:40.915685 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.95s 2026-04-17 07:04:40.915694 | orchestrator | horizon : Flush handlers ------------------------------------------------ 1.93s 2026-04-17 07:04:40.915703 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.87s 2026-04-17 07:04:40.915712 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.80s 2026-04-17 07:04:40.915721 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.73s 2026-04-17 07:04:40.915731 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.69s 2026-04-17 07:04:40.915740 | orchestrator | horizon : Set empty custom policy --------------------------------------- 1.69s 2026-04-17 07:04:40.915750 | orchestrator | horizon : Update policy file name --------------------------------------- 1.66s 2026-04-17 07:04:41.122577 | orchestrator | + osism apply -a upgrade skyline 2026-04-17 07:04:42.460244 | orchestrator | 2026-04-17 07:04:42 | INFO  | Prepare task for execution of skyline. 
2026-04-17 07:04:42.529621 | orchestrator | 2026-04-17 07:04:42 | INFO  | Task 5dd5c944-a0f4-4190-9fc3-086c3332c4aa (skyline) was prepared for execution. 2026-04-17 07:04:42.529853 | orchestrator | 2026-04-17 07:04:42 | INFO  | It takes a moment until task 5dd5c944-a0f4-4190-9fc3-086c3332c4aa (skyline) has been started and output is visible here. 2026-04-17 07:05:01.510315 | orchestrator | 2026-04-17 07:05:01.510424 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 07:05:01.510431 | orchestrator | 2026-04-17 07:05:01.510436 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 07:05:01.510440 | orchestrator | Friday 17 April 2026 07:04:47 +0000 (0:00:01.649) 0:00:01.649 ********** 2026-04-17 07:05:01.510445 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:05:01.510450 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:05:01.510454 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:05:01.510458 | orchestrator | 2026-04-17 07:05:01.510462 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 07:05:01.510466 | orchestrator | Friday 17 April 2026 07:04:49 +0000 (0:00:01.875) 0:00:03.525 ********** 2026-04-17 07:05:01.510469 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-04-17 07:05:01.510474 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-04-17 07:05:01.510477 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-04-17 07:05:01.510481 | orchestrator | 2026-04-17 07:05:01.510485 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-04-17 07:05:01.510489 | orchestrator | 2026-04-17 07:05:01.510493 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-17 07:05:01.510496 | orchestrator | Friday 17 April 2026 07:04:51 +0000 
(0:00:01.986) 0:00:05.511 ********** 2026-04-17 07:05:01.510501 | orchestrator | included: /ansible/roles/skyline/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:05:01.510506 | orchestrator | 2026-04-17 07:05:01.510510 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-04-17 07:05:01.510514 | orchestrator | Friday 17 April 2026 07:04:54 +0000 (0:00:03.329) 0:00:08.840 ********** 2026-04-17 07:05:01.510537 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:01.510553 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:01.510569 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:01.510574 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:01.510582 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:01.510589 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:01.510593 | orchestrator | 2026-04-17 07:05:01.510597 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-17 07:05:01.510601 | orchestrator | Friday 17 April 2026 07:04:58 +0000 (0:00:03.319) 0:00:12.160 ********** 2026-04-17 07:05:01.510605 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:05:01.510609 | orchestrator | 2026-04-17 07:05:01.510613 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-17 07:05:01.510616 | orchestrator | Friday 17 April 2026 07:05:00 +0000 (0:00:01.927) 0:00:14.087 ********** 2026-04-17 07:05:01.510625 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:03.958984 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:03.959133 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:03.959152 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:03.959184 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:03.959207 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:03.959220 | orchestrator | 2026-04-17 07:05:03.959233 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-17 07:05:03.959246 | orchestrator | Friday 17 April 2026 07:05:03 +0000 (0:00:03.334) 0:00:17.422 ********** 2026-04-17 07:05:03.959264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 07:05:03.959277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 07:05:03.959289 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:05:03.959312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 07:05:05.773471 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 07:05:05.773556 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:05:05.773586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 07:05:05.773601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 07:05:05.773613 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:05:05.773625 | orchestrator | 2026-04-17 07:05:05.773637 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-17 07:05:05.773669 | orchestrator | Friday 17 April 2026 07:05:05 +0000 (0:00:01.689) 0:00:19.111 ********** 2026-04-17 07:05:05.773698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 07:05:05.773712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 07:05:05.773729 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:05:05.773742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 07:05:05.773755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 07:05:05.773775 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:05:05.773794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 07:05:15.186984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 07:05:15.187109 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:05:15.187128 | orchestrator | 2026-04-17 07:05:15.187141 | orchestrator | TASK [skyline : Copying over skyline.yaml files for services] ****************** 2026-04-17 07:05:15.187153 | orchestrator | Friday 17 April 2026 07:05:07 +0000 (0:00:01.900) 0:00:21.012 ********** 2026-04-17 07:05:15.187166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:15.187201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:15.187233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:15.187253 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:15.187267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:15.187289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:15.187301 | orchestrator | 2026-04-17 07:05:15.187371 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-04-17 07:05:15.187384 | orchestrator | Friday 17 April 2026 07:05:10 +0000 (0:00:03.710) 0:00:24.722 ********** 2026-04-17 07:05:15.187395 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-17 07:05:15.187407 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-17 07:05:15.187418 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-17 07:05:15.187428 | orchestrator | 2026-04-17 07:05:15.187439 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] 
******************** 2026-04-17 07:05:15.187450 | orchestrator | Friday 17 April 2026 07:05:13 +0000 (0:00:02.685) 0:00:27.407 ********** 2026-04-17 07:05:15.187470 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-17 07:05:23.305064 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-17 07:05:23.305192 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-17 07:05:23.305212 | orchestrator | 2026-04-17 07:05:23.305225 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-04-17 07:05:23.305237 | orchestrator | Friday 17 April 2026 07:05:16 +0000 (0:00:03.064) 0:00:30.472 ********** 2026-04-17 07:05:23.305272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:23.305291 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:23.305394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:23.305582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:23.305614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET 
/']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:23.305630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:23.305656 | orchestrator | 2026-04-17 07:05:23.305670 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-17 07:05:23.305683 | orchestrator | Friday 17 April 2026 07:05:20 +0000 (0:00:03.829) 0:00:34.302 ********** 2026-04-17 07:05:23.305696 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:05:23.305709 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:05:23.305728 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:05:23.305746 | orchestrator | 2026-04-17 07:05:23.305765 | orchestrator | TASK [service-check-containers : skyline | Check containers] ******************* 2026-04-17 
07:05:23.305784 | orchestrator | Friday 17 April 2026 07:05:22 +0000 (0:00:01.691) 0:00:35.994 ********** 2026-04-17 07:05:23.305805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:23.305840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:27.398413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-17 07:05:27.398510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:27.398543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:27.398562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-17 07:05:27.398567 | orchestrator | 2026-04-17 07:05:27.398576 | orchestrator | TASK [service-check-containers : skyline | Notify handlers to restart containers] *** 2026-04-17 07:05:27.398582 | orchestrator | Friday 17 April 2026 07:05:25 +0000 (0:00:03.456) 0:00:39.450 ********** 2026-04-17 07:05:27.398588 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 07:05:27.398593 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:05:27.398597 | orchestrator | } 2026-04-17 07:05:27.398601 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 07:05:27.398605 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:05:27.398609 | orchestrator | } 2026-04-17 07:05:27.398614 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 07:05:27.398618 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:05:27.398622 | orchestrator | } 2026-04-17 07:05:27.398626 | orchestrator | 2026-04-17 07:05:27.398630 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:05:27.398634 | orchestrator | Friday 17 April 2026 07:05:26 +0000 (0:00:01.378) 0:00:40.829 ********** 2026-04-17 07:05:27.398638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 07:05:27.398643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 07:05:27.398648 | 
orchestrator | skipping: [testbed-node-0] 2026-04-17 07:05:27.398652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 07:05:27.398667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-17 07:06:02.742224 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:06:02.742399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-17 07:06:02.742424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-17 07:06:02.742439 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:06:02.742451 | orchestrator |
2026-04-17 07:06:02.742463 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-17 07:06:02.742476 | orchestrator | Friday 17 April 2026 07:05:28 +0000 (0:00:02.083) 0:00:42.913 **********
2026-04-17 07:06:02.742487 | orchestrator |
2026-04-17 07:06:02.742498 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-17 07:06:02.742509 | orchestrator | Friday 17 April 2026 07:05:29 +0000 (0:00:00.440) 0:00:43.353 **********
2026-04-17 07:06:02.742544 | orchestrator |
2026-04-17 07:06:02.742555 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-17 07:06:02.742566 | orchestrator | Friday 17 April 2026 07:05:29 +0000 (0:00:00.427) 0:00:43.780 **********
2026-04-17 07:06:02.742577 | orchestrator |
2026-04-17 07:06:02.742588 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-04-17 07:06:02.742599 | orchestrator | Friday 17 April 2026 07:05:30 +0000 (0:00:00.852) 0:00:44.633 **********
2026-04-17 07:06:02.742610 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:06:02.742622 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:06:02.742632 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:06:02.742643 | orchestrator |
2026-04-17 07:06:02.742654 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-04-17 07:06:02.742665 | orchestrator | Friday 17 April 2026 07:05:45 +0000 (0:00:14.515) 0:00:59.149 **********
2026-04-17 07:06:02.742676 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:06:02.742687 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:06:02.742698 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:06:02.742708 | orchestrator |
2026-04-17 07:06:02.742719 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:06:02.742732 | orchestrator | testbed-node-0 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-17 07:06:02.742743 | orchestrator | testbed-node-1 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-17 07:06:02.742756 | orchestrator | testbed-node-2 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-17 07:06:02.742820 | orchestrator |
2026-04-17 07:06:02.742835 | orchestrator |
2026-04-17 07:06:02.742848 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:06:02.742862 | orchestrator | Friday 17 April 2026 07:06:02 +0000 (0:00:17.147) 0:01:16.297 **********
2026-04-17 07:06:02.742874 | orchestrator | ===============================================================================
2026-04-17 07:06:02.742905 | orchestrator | skyline : Restart skyline-console container ---------------------------- 17.15s
2026-04-17 07:06:02.742918 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 14.52s
2026-04-17 07:06:02.742931 | orchestrator | skyline : Copying over config.json files for services ------------------- 3.83s
2026-04-17 07:06:02.742944 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 3.71s
2026-04-17 07:06:02.742957 | orchestrator | service-check-containers : skyline | Check containers ------------------- 3.46s
2026-04-17 07:06:02.742970 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 3.33s
2026-04-17 07:06:02.742983 | orchestrator | skyline : include_tasks ------------------------------------------------- 3.33s
2026-04-17 07:06:02.742995 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 3.32s
2026-04-17 07:06:02.743009 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 3.07s
2026-04-17 07:06:02.743021 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 2.68s
2026-04-17 07:06:02.743034 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.08s
2026-04-17 07:06:02.743047 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.99s
2026-04-17 07:06:02.743060 | orchestrator | skyline : include_tasks ------------------------------------------------- 1.93s
2026-04-17 07:06:02.743072 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.90s
2026-04-17 07:06:02.743085 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.88s
2026-04-17 07:06:02.743098 | orchestrator | skyline : Flush handlers ------------------------------------------------ 1.72s
2026-04-17 07:06:02.743111 | orchestrator | skyline : Copying over custom logos ------------------------------------- 1.69s
2026-04-17 07:06:02.743131 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS certificate --- 1.69s
2026-04-17 07:06:02.743143 | orchestrator | service-check-containers : skyline | Notify handlers to restart containers --- 1.38s
2026-04-17 07:06:02.930469 | orchestrator | + osism apply -a upgrade glance
2026-04-17 07:06:04.227573 | orchestrator | 2026-04-17 07:06:04 | INFO  | Prepare task for execution of glance.
2026-04-17 07:06:04.307114 | orchestrator | 2026-04-17 07:06:04 | INFO  | Task a9011e45-8ba3-4d78-b7a0-1fc3ba3cd5f2 (glance) was prepared for execution.
2026-04-17 07:06:04.307210 | orchestrator | 2026-04-17 07:06:04 | INFO  | It takes a moment until task a9011e45-8ba3-4d78-b7a0-1fc3ba3cd5f2 (glance) has been started and output is visible here.
2026-04-17 07:06:49.371830 | orchestrator |
2026-04-17 07:06:49.371947 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 07:06:49.371966 | orchestrator |
2026-04-17 07:06:49.371986 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 07:06:49.372006 | orchestrator | Friday 17 April 2026 07:06:09 +0000 (0:00:01.731) 0:00:01.731 **********
2026-04-17 07:06:49.372024 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:06:49.372045 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:06:49.372065 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:06:49.372085 | orchestrator |
2026-04-17 07:06:49.372103 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 07:06:49.372123 | orchestrator | Friday 17 April 2026 07:06:11 +0000 (0:00:01.793) 0:00:03.525 **********
2026-04-17 07:06:49.372143 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-17 07:06:49.372163 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-17 07:06:49.372182 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-17 07:06:49.372200 | orchestrator |
2026-04-17 07:06:49.372216 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-17 07:06:49.372227 | orchestrator |
2026-04-17 07:06:49.372238 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-17 07:06:49.372248 | orchestrator | Friday 17 April 2026 07:06:13 +0000 (0:00:02.020) 0:00:05.545 **********
2026-04-17 07:06:49.372260 | orchestrator | included: /ansible/roles/glance/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 07:06:49.372340 | orchestrator |
2026-04-17 07:06:49.372355 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-17 07:06:49.372366 | orchestrator | Friday 17 April 2026 07:06:15 +0000 (0:00:02.370) 0:00:07.915 **********
2026-04-17 07:06:49.372380 | orchestrator | included: /ansible/roles/glance/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 07:06:49.372393 | orchestrator |
2026-04-17 07:06:49.372406 | orchestrator | TASK [glance : Start Glance upgrade] *******************************************
2026-04-17 07:06:49.372420 | orchestrator | Friday 17 April 2026 07:06:17 +0000 (0:00:02.219) 0:00:10.135 **********
2026-04-17 07:06:49.372432 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:06:49.372444 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:06:49.372457 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:06:49.372469 | orchestrator |
2026-04-17 07:06:49.372481 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-17 07:06:49.372494 | orchestrator | Friday 17 April 2026 07:06:19 +0000 (0:00:01.388) 0:00:11.523 **********
2026-04-17 07:06:49.372506 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:06:49.372519 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:06:49.372532 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-0
2026-04-17 07:06:49.372544 | orchestrator |
2026-04-17 07:06:49.372557 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-17 07:06:49.372570 | orchestrator | Friday 17 April 2026 07:06:20 +0000 (0:00:01.828) 0:00:13.351 **********
2026-04-17 07:06:49.372590 | orchestrator | ok: [testbed-node-0] => (item={'key':
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:06:49.372635 | orchestrator | 2026-04-17 07:06:49.372648 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-17 07:06:49.372661 | orchestrator | Friday 17 April 2026 07:06:25 +0000 (0:00:04.857) 0:00:18.208 ********** 2026-04-17 
07:06:49.372675 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0
2026-04-17 07:06:49.372688 | orchestrator |
2026-04-17 07:06:49.372720 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-17 07:06:49.372732 | orchestrator | Friday 17 April 2026 07:06:27 +0000 (0:00:01.477) 0:00:19.687 **********
2026-04-17 07:06:49.372743 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:06:49.372754 | orchestrator |
2026-04-17 07:06:49.372764 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-17 07:06:49.372775 | orchestrator | Friday 17 April 2026 07:06:31 +0000 (0:00:04.602) 0:00:24.289 **********
2026-04-17 07:06:49.372786 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-17 07:06:49.372798 | orchestrator |
2026-04-17 07:06:49.372808 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-17 07:06:49.372819 | orchestrator | Friday 17 April 2026 07:06:34 +0000 (0:00:02.490) 0:00:26.780 **********
2026-04-17 07:06:49.372830 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-17 07:06:49.372841 | orchestrator |
2026-04-17 07:06:49.372851 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-17 07:06:49.372862 | orchestrator | Friday 17 April 2026 07:06:36 +0000 (0:00:02.001) 0:00:28.781 **********
2026-04-17 07:06:49.372872 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:06:49.372883 | orchestrator |
2026-04-17 07:06:49.372893 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-17 07:06:49.372904 | orchestrator | Friday 17 April 2026 07:06:37 +0000
(0:00:01.455) 0:00:30.237 ********** 2026-04-17 07:06:49.372914 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:06:49.372925 | orchestrator | 2026-04-17 07:06:49.372936 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-17 07:06:49.372946 | orchestrator | Friday 17 April 2026 07:06:38 +0000 (0:00:01.162) 0:00:31.400 ********** 2026-04-17 07:06:49.372957 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:06:49.372976 | orchestrator | 2026-04-17 07:06:49.372986 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-17 07:06:49.372997 | orchestrator | Friday 17 April 2026 07:06:40 +0000 (0:00:01.139) 0:00:32.540 ********** 2026-04-17 07:06:49.373008 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0 2026-04-17 07:06:49.373018 | orchestrator | 2026-04-17 07:06:49.373029 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-17 07:06:49.373039 | orchestrator | Friday 17 April 2026 07:06:41 +0000 (0:00:01.488) 0:00:34.028 ********** 2026-04-17 07:06:49.373052 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:06:49.373064 | orchestrator | 2026-04-17 07:06:49.373075 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-17 07:06:49.373085 | orchestrator | Friday 17 April 2026 07:06:46 +0000 (0:00:04.751) 0:00:38.780 ********** 2026-04-17 07:06:49.373106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 07:08:43.743497 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:08:43.743615 | orchestrator | 2026-04-17 07:08:43.743632 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-17 07:08:43.743646 | orchestrator | Friday 17 April 2026 07:06:50 +0000 (0:00:04.076) 0:00:42.856 ********** 2026-04-17 07:08:43.743662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 07:08:43.743679 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:08:43.743690 | orchestrator | 2026-04-17 07:08:43.743701 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-17 07:08:43.743712 | orchestrator | Friday 17 April 2026 07:06:54 +0000 (0:00:04.007) 0:00:46.864 ********** 2026-04-17 07:08:43.743723 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:08:43.743734 | orchestrator | 2026-04-17 07:08:43.743745 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-17 07:08:43.743756 | orchestrator | Friday 17 April 2026 07:06:58 +0000 (0:00:04.400) 0:00:51.265 ********** 2026-04-17 07:08:43.743786 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:08:43.743822 | orchestrator | 2026-04-17 07:08:43.743834 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-17 07:08:43.743845 | orchestrator | Friday 17 April 2026 07:07:04 +0000 (0:00:05.183) 0:00:56.449 ********** 2026-04-17 
07:08:43.743856 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:08:43.743866 | orchestrator |
2026-04-17 07:08:43.743877 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-17 07:08:43.743888 | orchestrator | Friday 17 April 2026 07:07:10 +0000 (0:00:06.695) 0:01:03.145 **********
2026-04-17 07:08:43.743898 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:08:43.743909 | orchestrator |
2026-04-17 07:08:43.743920 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-17 07:08:43.743931 | orchestrator | Friday 17 April 2026 07:07:14 +0000 (0:00:04.120) 0:01:07.266 **********
2026-04-17 07:08:43.743941 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:08:43.743952 | orchestrator |
2026-04-17 07:08:43.743963 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-17 07:08:43.743974 | orchestrator | Friday 17 April 2026 07:07:19 +0000 (0:00:04.200) 0:01:11.466 **********
2026-04-17 07:08:43.743984 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:08:43.743995 | orchestrator |
2026-04-17 07:08:43.744006 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-17 07:08:43.744018 | orchestrator | Friday 17 April 2026 07:07:23 +0000 (0:00:04.116) 0:01:15.583 **********
2026-04-17 07:08:43.744032 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:08:43.744044 | orchestrator |
2026-04-17 07:08:43.744056 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-17 07:08:43.744069 | orchestrator | Friday 17 April 2026 07:07:24 +0000 (0:00:01.161) 0:01:16.744 **********
2026-04-17 07:08:43.744081 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-17 07:08:43.744095 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:08:43.744108 | orchestrator |
2026-04-17 07:08:43.744120 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-17 07:08:43.744132 | orchestrator | Friday 17 April 2026 07:07:28 +0000 (0:00:04.298) 0:01:21.042 **********
2026-04-17 07:08:43.744144 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:08:43.744157 | orchestrator |
2026-04-17 07:08:43.744169 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************
2026-04-17 07:08:43.744181 | orchestrator | Friday 17 April 2026 07:07:32 +0000 (0:00:04.247) 0:01:25.290 **********
2026-04-17 07:08:43.744226 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:08:43.744239 | orchestrator |
2026-04-17 07:08:43.744251 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-17 07:08:43.744264 | orchestrator | Friday 17 April 2026 07:07:37 +0000 (0:00:04.368) 0:01:29.659 **********
2026-04-17 07:08:43.744277 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:08:43.744289 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:08:43.744302 | orchestrator | included: /ansible/roles/glance/tasks/stop_service.yml for testbed-node-0
2026-04-17 07:08:43.744316 | orchestrator |
2026-04-17 07:08:43.744329 | orchestrator | TASK [glance : Stop glance service] ********************************************
2026-04-17 07:08:43.744349 | orchestrator | Friday 17 April 2026 07:07:39 +0000 (0:00:01.839) 0:01:31.498 **********
2026-04-17 07:08:43.744362 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:08:43.744375 | orchestrator |
2026-04-17 07:08:43.744388 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-17 07:08:43.744401 | orchestrator | Friday 17 April 2026 07:07:52 +0000 (0:00:13.090) 0:01:44.588 **********
2026-04-17 07:08:43.744412 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:08:43.744423 |
orchestrator | 2026-04-17 07:08:43.744434 | orchestrator | TASK [glance : Running Glance database expand container] *********************** 2026-04-17 07:08:43.744444 | orchestrator | Friday 17 April 2026 07:07:55 +0000 (0:00:03.403) 0:01:47.992 ********** 2026-04-17 07:08:43.744455 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:08:43.744466 | orchestrator | 2026-04-17 07:08:43.744477 | orchestrator | TASK [glance : Running Glance database migrate container] ********************** 2026-04-17 07:08:43.744488 | orchestrator | Friday 17 April 2026 07:08:22 +0000 (0:00:26.872) 0:02:14.864 ********** 2026-04-17 07:08:43.744499 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:08:43.744510 | orchestrator | 2026-04-17 07:08:43.744520 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-17 07:08:43.744531 | orchestrator | Friday 17 April 2026 07:08:38 +0000 (0:00:16.092) 0:02:30.957 ********** 2026-04-17 07:08:43.744542 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:08:43.744553 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-1, testbed-node-2 2026-04-17 07:08:43.744564 | orchestrator | 2026-04-17 07:08:43.744574 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-17 07:08:43.744585 | orchestrator | Friday 17 April 2026 07:08:39 +0000 (0:00:01.423) 0:02:32.380 ********** 2026-04-17 07:08:43.744607 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:09:09.197854 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:09:09.197992 | orchestrator | 2026-04-17 07:09:09.198011 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-17 07:09:09.198095 | orchestrator | Friday 17 April 2026 07:08:45 +0000 (0:00:05.179) 0:02:37.560 ********** 2026-04-17 07:09:09.198108 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-1, testbed-node-2 2026-04-17 07:09:09.198120 | orchestrator | 2026-04-17 07:09:09.198131 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-17 07:09:09.198168 | orchestrator | Friday 17 April 2026 07:08:46 +0000 (0:00:01.236) 0:02:38.796 ********** 2026-04-17 07:09:09.198180 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:09:09.198192 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:09:09.198203 | orchestrator | 2026-04-17 07:09:09.198214 | orchestrator | TASK [glance : Copy over 
multiple ceph configs for Glance] ********************* 2026-04-17 07:09:09.198224 | orchestrator | Friday 17 April 2026 07:08:51 +0000 (0:00:04.766) 0:02:43.562 ********** 2026-04-17 07:09:09.198235 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-17 07:09:09.198248 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-17 07:09:09.198259 | orchestrator | 2026-04-17 07:09:09.198270 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-17 07:09:09.198280 | orchestrator | Friday 17 April 2026 07:08:53 +0000 (0:00:02.390) 0:02:45.953 ********** 2026-04-17 07:09:09.198291 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-17 07:09:09.198302 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-17 07:09:09.198313 | orchestrator | 2026-04-17 07:09:09.198324 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-17 07:09:09.198335 | orchestrator | Friday 17 April 2026 07:08:55 +0000 (0:00:02.071) 0:02:48.024 ********** 2026-04-17 07:09:09.198346 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:09:09.198356 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:09:09.198367 | orchestrator | 2026-04-17 07:09:09.198378 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-17 07:09:09.198392 | orchestrator | Friday 17 April 2026 07:08:57 +0000 (0:00:01.818) 0:02:49.843 ********** 2026-04-17 07:09:09.198406 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:09:09.198419 | orchestrator | 2026-04-17 
07:09:09.198431 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-17 07:09:09.198445 | orchestrator | Friday 17 April 2026 07:08:58 +0000 (0:00:01.149) 0:02:50.993 ********** 2026-04-17 07:09:09.198466 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:09:09.198479 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:09:09.198493 | orchestrator | 2026-04-17 07:09:09.198506 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-17 07:09:09.198519 | orchestrator | Friday 17 April 2026 07:08:59 +0000 (0:00:01.226) 0:02:52.219 ********** 2026-04-17 07:09:09.198532 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-1, testbed-node-2 2026-04-17 07:09:09.198545 | orchestrator | 2026-04-17 07:09:09.198577 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-17 07:09:09.198590 | orchestrator | Friday 17 April 2026 07:09:01 +0000 (0:00:01.238) 0:02:53.457 ********** 2026-04-17 07:09:09.198606 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:09:09.198623 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:09:09.198645 | orchestrator | 2026-04-17 07:09:09.198658 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-17 07:09:09.198671 | orchestrator | Friday 17 April 2026 07:09:06 +0000 (0:00:05.054) 0:02:58.512 ********** 2026-04-17 07:09:09.198695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 07:09:23.132586 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:09:23.132706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 07:09:23.132749 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:09:23.132763 | orchestrator | 2026-04-17 07:09:23.132775 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-17 07:09:23.132787 | orchestrator | Friday 17 April 2026 07:09:10 +0000 (0:00:04.407) 0:03:02.920 ********** 2026-04-17 07:09:23.132799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 07:09:23.132811 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:09:23.132841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 07:09:23.132854 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:09:23.132873 | orchestrator | 2026-04-17 07:09:23.132884 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-17 07:09:23.132895 | orchestrator | Friday 17 April 2026 07:09:14 +0000 (0:00:04.006) 0:03:06.926 ********** 2026-04-17 07:09:23.132906 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:09:23.132917 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:09:23.132927 | orchestrator | 2026-04-17 07:09:23.132938 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-17 07:09:23.132949 | orchestrator | Friday 17 April 2026 07:09:19 +0000 (0:00:04.539) 0:03:11.465 ********** 2026-04-17 07:09:23.132960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:09:23.132983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:10:09.269223 | orchestrator | 2026-04-17 07:10:09.269324 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-17 07:10:09.269340 | orchestrator | Friday 17 April 2026 07:09:24 +0000 (0:00:05.135) 0:03:16.601 ********** 2026-04-17 07:10:09.269353 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:10:09.269365 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:10:09.269376 | orchestrator | 2026-04-17 07:10:09.269387 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-17 07:10:09.269398 | orchestrator | Friday 17 April 2026 07:09:31 +0000 (0:00:06.948) 0:03:23.549 ********** 2026-04-17 07:10:09.269408 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:10:09.269419 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:10:09.269430 | orchestrator | 2026-04-17 07:10:09.269441 | orchestrator | TASK [glance : Copying over glance-image-import.conf] 
************************** 2026-04-17 07:10:09.269451 | orchestrator | Friday 17 April 2026 07:09:35 +0000 (0:00:04.405) 0:03:27.955 ********** 2026-04-17 07:10:09.269462 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:10:09.269473 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:10:09.269484 | orchestrator | 2026-04-17 07:10:09.269494 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-17 07:10:09.269505 | orchestrator | Friday 17 April 2026 07:09:39 +0000 (0:00:04.360) 0:03:32.315 ********** 2026-04-17 07:10:09.269516 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:10:09.269526 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:10:09.269537 | orchestrator | 2026-04-17 07:10:09.269548 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-17 07:10:09.269559 | orchestrator | Friday 17 April 2026 07:09:44 +0000 (0:00:04.374) 0:03:36.690 ********** 2026-04-17 07:10:09.269569 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:10:09.269580 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:10:09.269591 | orchestrator | 2026-04-17 07:10:09.269601 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-17 07:10:09.269612 | orchestrator | Friday 17 April 2026 07:09:45 +0000 (0:00:01.266) 0:03:37.957 ********** 2026-04-17 07:10:09.269623 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-17 07:10:09.269635 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:10:09.269646 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-17 07:10:09.269657 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:10:09.269668 | orchestrator | 2026-04-17 07:10:09.269678 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] 
*********************** 2026-04-17 07:10:09.269689 | orchestrator | Friday 17 April 2026 07:09:50 +0000 (0:00:04.598) 0:03:42.555 ********** 2026-04-17 07:10:09.269700 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:10:09.269711 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:10:09.269722 | orchestrator | 2026-04-17 07:10:09.269733 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-17 07:10:09.269744 | orchestrator | Friday 17 April 2026 07:09:54 +0000 (0:00:04.648) 0:03:47.203 ********** 2026-04-17 07:10:09.269754 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:10:09.269765 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:10:09.269778 | orchestrator | 2026-04-17 07:10:09.269790 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-04-17 07:10:09.269803 | orchestrator | Friday 17 April 2026 07:09:59 +0000 (0:00:04.752) 0:03:51.956 ********** 2026-04-17 07:10:09.269821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:10:09.269880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:10:09.269897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 07:10:09.269921 | orchestrator | 2026-04-17 07:10:09.269934 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-04-17 07:10:09.269948 | orchestrator | Friday 17 April 2026 07:10:04 +0000 (0:00:05.268) 0:03:57.225 ********** 2026-04-17 07:10:09.269961 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 07:10:09.269974 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:10:09.269987 | orchestrator | } 2026-04-17 07:10:09.270000 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 07:10:09.270012 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:10:09.270102 | orchestrator | } 2026-04-17 07:10:09.270116 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 07:10:09.270128 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:10:09.270141 | orchestrator | } 2026-04-17 07:10:09.270152 | orchestrator | 2026-04-17 07:10:09.270163 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:10:09.270174 | orchestrator | Friday 17 April 2026 07:10:06 +0000 (0:00:01.424) 0:03:58.650 ********** 2026-04-17 07:10:09.270197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 07:11:17.889832 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:11:17.889950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 07:11:17.890084 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:11:17.890105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 07:11:17.890118 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:11:17.890130 | orchestrator | 2026-04-17 07:11:17.890174 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-17 07:11:17.890200 | orchestrator | Friday 17 April 2026 07:10:10 +0000 (0:00:04.669) 0:04:03.319 ********** 2026-04-17 07:11:17.890212 | orchestrator | 2026-04-17 07:11:17.890223 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-17 07:11:17.890234 | orchestrator | Friday 17 April 2026 07:10:11 +0000 (0:00:00.448) 0:04:03.767 ********** 2026-04-17 07:11:17.890245 | orchestrator | 2026-04-17 07:11:17.890256 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-17 07:11:17.890285 | orchestrator | Friday 17 April 2026 07:10:11 +0000 (0:00:00.455) 
0:04:04.222 ********** 2026-04-17 07:11:17.890308 | orchestrator | 2026-04-17 07:11:17.890319 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-17 07:11:17.890330 | orchestrator | Friday 17 April 2026 07:10:12 +0000 (0:00:00.791) 0:04:05.014 ********** 2026-04-17 07:11:17.890341 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:11:17.890352 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:11:17.890363 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:11:17.890376 | orchestrator | 2026-04-17 07:11:17.890388 | orchestrator | TASK [glance : Running Glance database contract container] ********************* 2026-04-17 07:11:17.890400 | orchestrator | Friday 17 April 2026 07:10:53 +0000 (0:00:41.281) 0:04:46.296 ********** 2026-04-17 07:11:17.890412 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:11:17.890425 | orchestrator | 2026-04-17 07:11:17.890436 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-17 07:11:17.890449 | orchestrator | Friday 17 April 2026 07:11:10 +0000 (0:00:16.978) 0:05:03.274 ********** 2026-04-17 07:11:17.890461 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:11:17.890473 | orchestrator | 2026-04-17 07:11:17.890485 | orchestrator | TASK [glance : Finish Glance upgrade] ****************************************** 2026-04-17 07:11:17.890497 | orchestrator | Friday 17 April 2026 07:11:14 +0000 (0:00:03.234) 0:05:06.509 ********** 2026-04-17 07:11:17.890509 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:11:17.890522 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:11:17.890534 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:11:17.890546 | orchestrator | 2026-04-17 07:11:17.890558 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-17 07:11:17.890571 | orchestrator | Friday 17 April 2026 07:11:15 +0000 (0:00:01.478) 0:05:07.987 ********** 
2026-04-17 07:11:17.890583 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:11:17.890595 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:11:17.890608 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:11:17.890620 | orchestrator |
2026-04-17 07:11:17.890632 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:11:17.890646 | orchestrator | testbed-node-0 : ok=27  changed=11  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-17 07:11:17.890660 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-17 07:11:17.890673 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-17 07:11:17.890685 | orchestrator |
2026-04-17 07:11:17.890698 | orchestrator |
2026-04-17 07:11:17.890709 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:11:17.890722 | orchestrator | Friday 17 April 2026 07:11:17 +0000 (0:00:01.857) 0:05:09.844 **********
2026-04-17 07:11:17.890735 | orchestrator | ===============================================================================
2026-04-17 07:11:17.890746 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.28s
2026-04-17 07:11:17.890757 | orchestrator | glance : Running Glance database expand container ---------------------- 26.87s
2026-04-17 07:11:17.890768 | orchestrator | glance : Running Glance database contract container -------------------- 16.98s
2026-04-17 07:11:17.890778 | orchestrator | glance : Running Glance database migrate container --------------------- 16.09s
2026-04-17 07:11:17.890789 | orchestrator | glance : Stop glance service ------------------------------------------- 13.09s
2026-04-17 07:11:17.890800 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.95s
2026-04-17 07:11:17.890811 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.70s
2026-04-17 07:11:17.890821 | orchestrator | service-check-containers : glance | Check containers -------------------- 5.27s
2026-04-17 07:11:17.890832 | orchestrator | glance : Copying over config.json files for services -------------------- 5.18s
2026-04-17 07:11:17.890849 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.18s
2026-04-17 07:11:17.890860 | orchestrator | glance : Copying over config.json files for services -------------------- 5.14s
2026-04-17 07:11:17.890871 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.06s
2026-04-17 07:11:17.890882 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.86s
2026-04-17 07:11:17.890892 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.77s
2026-04-17 07:11:17.890903 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 4.75s
2026-04-17 07:11:17.890914 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.75s
2026-04-17 07:11:17.890924 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.67s
2026-04-17 07:11:17.890935 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.65s
2026-04-17 07:11:17.890946 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.60s
2026-04-17 07:11:17.890957 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.60s
2026-04-17 07:11:18.117866 | orchestrator | + osism apply -a upgrade cinder
2026-04-17 07:11:19.428934 | orchestrator | 2026-04-17 07:11:19 | INFO  | Prepare task for execution of cinder.
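The glance play above finished with failed=0 on all three nodes before the job moved on to cinder via `osism apply -a upgrade cinder`. A hypothetical sketch of that per-service loop, assuming only the command form `osism apply -a upgrade <service>` visible in this log (the actual testbed driver script is not part of this excerpt):

```shell
#!/usr/bin/env bash
# Hypothetical driver loop for the per-service upgrades seen in this log
# (glance, then cinder). Only the command form `osism apply -a upgrade <service>`
# is taken from the log; the real testbed tooling may differ.
set -euo pipefail

services=(glance cinder)  # upgraded sequentially, in this order

for service in "${services[@]}"; do
    printf '+ osism apply -a upgrade %s\n' "$service"
    # osism apply -a upgrade "$service"  # real invocation, commented out for a dry run
done
```

Because `set -e` is in effect, a failing upgrade of one service would stop the loop before the next service is attempted, matching the sequential behavior in the log.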
2026-04-17 07:11:19.515546 | orchestrator | 2026-04-17 07:11:19 | INFO  | Task 4d8bf94b-f91b-41f4-b4c6-adcda5c9a476 (cinder) was prepared for execution. 2026-04-17 07:11:19.515641 | orchestrator | 2026-04-17 07:11:19 | INFO  | It takes a moment until task 4d8bf94b-f91b-41f4-b4c6-adcda5c9a476 (cinder) has been started and output is visible here. 2026-04-17 07:11:43.364497 | orchestrator | 2026-04-17 07:11:43.364615 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 07:11:43.364633 | orchestrator | 2026-04-17 07:11:43.364645 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 07:11:43.364657 | orchestrator | Friday 17 April 2026 07:11:24 +0000 (0:00:01.818) 0:00:01.819 ********** 2026-04-17 07:11:43.364668 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:11:43.364680 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:11:43.364690 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:11:43.364701 | orchestrator | 2026-04-17 07:11:43.364712 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 07:11:43.364723 | orchestrator | Friday 17 April 2026 07:11:26 +0000 (0:00:01.768) 0:00:03.587 ********** 2026-04-17 07:11:43.364734 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-17 07:11:43.364746 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-17 07:11:43.364758 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-17 07:11:43.364769 | orchestrator | 2026-04-17 07:11:43.364780 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-17 07:11:43.364791 | orchestrator | 2026-04-17 07:11:43.364803 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-17 07:11:43.364814 | orchestrator | Friday 17 April 2026 07:11:28 +0000 (0:00:02.366) 
0:00:05.954 ********** 2026-04-17 07:11:43.364825 | orchestrator | included: /ansible/roles/cinder/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:11:43.364837 | orchestrator | 2026-04-17 07:11:43.364848 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-17 07:11:43.364859 | orchestrator | Friday 17 April 2026 07:11:32 +0000 (0:00:03.500) 0:00:09.454 ********** 2026-04-17 07:11:43.364870 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:11:43.364881 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:11:43.364893 | orchestrator | included: /ansible/roles/cinder/tasks/config.yml for testbed-node-0 2026-04-17 07:11:43.364904 | orchestrator | 2026-04-17 07:11:43.364915 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-17 07:11:43.364925 | orchestrator | Friday 17 April 2026 07:11:34 +0000 (0:00:01.971) 0:00:11.426 ********** 2026-04-17 07:11:43.364994 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:11:43.365013 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:11:43.365027 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:11:43.365061 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:11:43.365075 | orchestrator | 2026-04-17 07:11:43.365088 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-17 07:11:43.365101 | orchestrator | Friday 17 April 2026 07:11:37 +0000 (0:00:03.339) 0:00:14.765 ********** 2026-04-17 07:11:43.365113 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:11:43.365126 | orchestrator | 2026-04-17 07:11:43.365139 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-17 07:11:43.365152 | orchestrator | Friday 17 April 2026 07:11:38 +0000 (0:00:01.129) 0:00:15.895 ********** 2026-04-17 07:11:43.365164 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0 2026-04-17 07:11:43.365177 | orchestrator | 2026-04-17 07:11:43.365188 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-17 07:11:43.365201 | orchestrator | Friday 17 April 2026 07:11:40 +0000 (0:00:01.451) 0:00:17.347 ********** 2026-04-17 07:11:43.365222 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-17 07:11:43.365235 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-17 07:11:43.365247 | orchestrator | 2026-04-17 07:11:43.365260 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-17 07:11:43.365272 | orchestrator | Friday 17 April 2026 07:11:42 +0000 (0:00:02.788) 0:00:20.135 ********** 2026-04-17 07:11:43.365286 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-17 07:11:43.365301 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-17 07:11:43.365325 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-17 07:12:03.615087 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-17 07:12:03.615245 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-17 07:12:03.615279 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-17 07:12:03.615300 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-17 07:12:03.615336 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-17 07:12:03.615350 | orchestrator | 2026-04-17 07:12:03.615363 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-17 07:12:03.615375 | orchestrator | Friday 17 April 2026 07:11:49 +0000 (0:00:06.260) 0:00:26.395 ********** 2026-04-17 07:12:03.615395 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-17 07:12:03.615408 | orchestrator | 2026-04-17 07:12:03.615419 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-17 07:12:03.615429 | 
orchestrator | Friday 17 April 2026 07:11:51 +0000 (0:00:02.303) 0:00:28.699 ********** 2026-04-17 07:12:03.615440 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-17 07:12:03.615452 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-17 07:12:03.615465 | orchestrator | 2026-04-17 07:12:03.615476 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-17 07:12:03.615487 | orchestrator | Friday 17 April 2026 07:11:55 +0000 (0:00:03.581) 0:00:32.281 ********** 2026-04-17 07:12:03.615498 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-17 07:12:03.615509 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-17 07:12:03.615519 | orchestrator | 2026-04-17 07:12:03.615530 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-17 07:12:03.615540 | orchestrator | Friday 17 April 2026 07:11:56 +0000 (0:00:01.868) 0:00:34.149 ********** 2026-04-17 07:12:03.615553 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:12:03.615566 | orchestrator | 2026-04-17 07:12:03.615579 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-17 07:12:03.615592 | orchestrator | Friday 17 April 2026 07:11:58 +0000 (0:00:01.150) 0:00:35.299 ********** 2026-04-17 07:12:03.615604 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:12:03.615615 | orchestrator | 2026-04-17 07:12:03.615628 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-17 07:12:03.615640 | orchestrator | Friday 17 April 2026 07:11:59 +0000 (0:00:01.104) 0:00:36.404 ********** 2026-04-17 07:12:03.615653 | orchestrator | included: 
/ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0 2026-04-17 07:12:03.615665 | orchestrator | 2026-04-17 07:12:03.615678 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-17 07:12:03.615690 | orchestrator | Friday 17 April 2026 07:12:00 +0000 (0:00:01.513) 0:00:37.917 ********** 2026-04-17 07:12:03.615705 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:12:03.615722 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:12:03.615751 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:12:10.360479 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:12:10.360591 | orchestrator | 2026-04-17 07:12:10.360609 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-17 07:12:10.360622 | orchestrator | Friday 17 April 2026 07:12:05 +0000 (0:00:04.752) 0:00:42.669 ********** 2026-04-17 07:12:10.360638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:12:10.360653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:12:10.360667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:12:10.360702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:12:10.360715 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:12:10.360728 | orchestrator | 2026-04-17 07:12:10.360754 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-17 07:12:10.360766 | orchestrator | Friday 17 April 2026 07:12:07 +0000 (0:00:01.663) 0:00:44.333 ********** 2026-04-17 07:12:10.360778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:12:10.360790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:12:10.360802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:12:10.360814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:12:10.360833 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:12:10.360844 | orchestrator | 2026-04-17 07:12:10.360855 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-17 07:12:10.360865 | orchestrator | Friday 17 April 2026 07:12:08 +0000 (0:00:01.726) 0:00:46.060 ********** 2026-04-17 07:12:10.360885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:12:37.577717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:12:37.577832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:12:37.577850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:12:37.577886 | orchestrator | 2026-04-17 07:12:37.577899 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-17 07:12:37.577911 | orchestrator | Friday 17 April 2026 07:12:14 +0000 (0:00:05.257) 0:00:51.317 ********** 2026-04-17 07:12:37.577983 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-17 07:12:37.577996 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:12:37.578008 | orchestrator | 2026-04-17 07:12:37.578077 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-17 07:12:37.578090 | orchestrator | Friday 17 April 2026 07:12:15 +0000 (0:00:01.504) 0:00:52.821 ********** 2026-04-17 07:12:37.578101 | orchestrator | included: service-uwsgi-config for testbed-node-0 2026-04-17 07:12:37.578112 | orchestrator | 2026-04-17 07:12:37.578123 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-04-17 07:12:37.578168 | orchestrator | Friday 17 April 2026 07:12:17 +0000 (0:00:01.786) 0:00:54.608 ********** 2026-04-17 07:12:37.578180 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:12:37.578191 | orchestrator | 2026-04-17 07:12:37.578202 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-17 07:12:37.578213 | orchestrator | Friday 17 April 2026 07:12:19 +0000 (0:00:02.535) 0:00:57.143 ********** 2026-04-17 07:12:37.578227 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:12:37.578269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:12:37.578292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:12:37.578312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:12:37.578346 | orchestrator | 2026-04-17 07:12:37.578361 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-17 07:12:37.578379 | orchestrator | Friday 17 April 2026 07:12:32 +0000 (0:00:12.461) 0:01:09.605 ********** 2026-04-17 07:12:37.578398 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:12:37.578417 | orchestrator | 2026-04-17 07:12:37.578436 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-17 07:12:37.578453 | orchestrator | Friday 17 April 2026 07:12:34 +0000 (0:00:02.147) 0:01:11.752 ********** 2026-04-17 07:12:37.578471 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:12:37.578482 | orchestrator | 
2026-04-17 07:12:37.578493 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-17 07:12:37.578503 | orchestrator | Friday 17 April 2026 07:12:36 +0000 (0:00:02.386) 0:01:14.138 ********** 2026-04-17 07:12:37.578516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:12:37.578538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  
2026-04-17 07:13:20.169542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:13:20.169663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:13:20.169705 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:13:20.169721 | orchestrator | 2026-04-17 07:13:20.169733 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-17 07:13:20.169745 | orchestrator | Friday 17 April 2026 07:12:38 +0000 (0:00:01.683) 0:01:15.822 ********** 2026-04-17 07:13:20.169756 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:13:20.169767 | orchestrator | 
2026-04-17 07:13:20.169778 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-17 07:13:20.169788 | orchestrator | Friday 17 April 2026 07:12:40 +0000 (0:00:01.465) 0:01:17.287 ********** 2026-04-17 07:13:20.169799 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:13:20.169810 | orchestrator | 2026-04-17 07:13:20.169821 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-17 07:13:20.169831 | orchestrator | Friday 17 April 2026 07:13:18 +0000 (0:00:38.279) 0:01:55.566 ********** 2026-04-17 07:13:20.169846 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:13:20.169908 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:13:20.169943 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:13:20.169966 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:13:20.169979 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:13:20.169992 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:13:20.170004 | orchestrator 
| ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:13:20.170091 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:13:27.914241 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 07:13:27.914357 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 07:13:27.914375 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 07:13:27.914388 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 07:13:27.914400 | orchestrator |
2026-04-17 07:13:27.914414 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-17 07:13:27.914426 | orchestrator | Friday 17 April 2026 07:13:21 +0000 (0:00:03.375) 0:01:58.942 **********
2026-04-17 07:13:27.914438 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:13:27.914450 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:13:27.914461 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:13:27.914471 | orchestrator |
2026-04-17 07:13:27.914483 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-17 07:13:27.914493 | orchestrator | Friday 17 April 2026 07:13:23 +0000 (0:00:01.423) 0:02:00.366 **********
2026-04-17 07:13:27.914506 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 07:13:27.914517 | orchestrator |
2026-04-17 07:13:27.914528 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-04-17 07:13:27.914539 | orchestrator | Friday 17 April 2026 07:13:24 +0000 (0:00:01.542) 0:02:01.909 **********
2026-04-17 07:13:27.914575 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-04-17 07:13:27.914588 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-04-17 07:13:27.914598 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-04-17 07:13:27.914609 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-04-17 07:13:27.914620 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-04-17 07:13:27.914630 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-04-17 07:13:27.914641 | orchestrator |
2026-04-17 07:13:27.914652 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-04-17 07:13:27.914680 | orchestrator | Friday 17 April 2026 07:13:27 +0000 (0:00:02.661) 0:02:04.571 **********
2026-04-17 07:13:27.914696 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:27.914712 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:27.914725 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:27.914738 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:27.914770 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:29.243081 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:29.243185 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:29.243203 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:29.243241 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:29.243273 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:29.243285 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:29.243296 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:29.243315 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:29.243335 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:32.601006 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:32.601090 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:32.601100 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:32.601129 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:32.601149 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:32.601159 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:32.601166 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-17 07:13:32.601178 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:32.601185 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:32.601196 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-17 07:13:49.332918 | orchestrator |
2026-04-17 07:13:49.333070 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-04-17 07:13:49.333089 | orchestrator | Friday 17 April 2026 07:13:33 +0000 (0:00:06.399) 0:02:10.970 **********
2026-04-17 07:13:49.333102 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-17 07:13:49.333115 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-17 07:13:49.333127 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-17 07:13:49.333138 | orchestrator |
2026-04-17 07:13:49.333149 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-04-17 07:13:49.333160 | orchestrator | Friday 17 April 2026 07:13:36 +0000 (0:00:02.729) 0:02:13.700 **********
2026-04-17 07:13:49.333171 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-17 07:13:49.333182 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-17 07:13:49.333222 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-17 07:13:49.333235 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-04-17 07:13:49.333248 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-04-17 07:13:49.333259 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-04-17 07:13:49.333270 | orchestrator |
2026-04-17 07:13:49.333281 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-04-17 07:13:49.333292 | orchestrator | Friday 17 April 2026 07:13:40 +0000 (0:00:03.752) 0:02:17.452 **********
2026-04-17 07:13:49.333304 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-04-17 07:13:49.333316 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-04-17 07:13:49.333327 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-04-17 07:13:49.333338 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-04-17 07:13:49.333349 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-04-17 07:13:49.333360 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-04-17 07:13:49.333371 | orchestrator |
2026-04-17 07:13:49.333384 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-04-17 07:13:49.333397 | orchestrator | Friday 17 April 2026 07:13:42 +0000 (0:00:02.086) 0:02:19.539 **********
2026-04-17 07:13:49.333410 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:13:49.333423 | orchestrator |
2026-04-17 07:13:49.333435 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-04-17 07:13:49.333447 | orchestrator | Friday 17 April 2026 07:13:43 +0000 (0:00:01.147) 0:02:20.686 **********
2026-04-17 07:13:49.333460 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:13:49.333472 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:13:49.333485 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:13:49.333497 | orchestrator |
2026-04-17 07:13:49.333509 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-17 07:13:49.333522 | orchestrator | Friday 17 April 2026 07:13:45 +0000 (0:00:01.588) 0:02:22.275 **********
2026-04-17 07:13:49.333535 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 07:13:49.333549 | orchestrator |
2026-04-17 07:13:49.333561 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-04-17 07:13:49.333573 | orchestrator | Friday 17 April 2026 07:13:46 +0000 (0:00:01.336) 0:02:23.611 **********
2026-04-17 07:13:49.333612 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:13:49.333632 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:13:49.333656 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:13:49.333671 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 07:13:49.333684 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 07:13:49.333697 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-17 07:13:49.333721 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 07:13:52.436445 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 07:13:52.436579 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-17 07:13:52.436595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 07:13:52.436616 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 07:13:52.436626 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-17 07:13:52.436659 | orchestrator |
2026-04-17 07:13:52.436671 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-04-17 07:13:52.436737 | orchestrator | Friday 17 April 2026 07:13:51 +0000 (0:00:05.201) 0:02:28.813 **********
2026-04-17 07:13:52.436771 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:13:52.436783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:13:52.436795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:13:52.436805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:13:52.436814 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:13:52.436863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:13:52.436890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:13:54.039219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:13:54.039316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:13:54.039330 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:13:54.039345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:13:54.039381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:13:54.039393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:13:54.039421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 
07:13:54.039431 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:13:54.039441 | orchestrator | 2026-04-17 07:13:54.039453 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-17 07:13:54.039471 | orchestrator | Friday 17 April 2026 07:13:53 +0000 (0:00:01.924) 0:02:30.737 ********** 2026-04-17 07:13:54.039489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:13:54.039508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:13:54.039535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:13:54.039554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:13:54.039569 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:13:54.039597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:13:57.015246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:13:57.015374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:13:57.015415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:13:57.015429 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:13:57.015445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:13:57.015458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:13:57.015489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:13:57.015502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:13:57.015521 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:13:57.015532 | orchestrator | 2026-04-17 07:13:57.015545 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-17 07:13:57.015557 | orchestrator | Friday 17 April 2026 07:13:55 +0000 (0:00:01.691) 0:02:32.429 ********** 2026-04-17 07:13:57.015569 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:13:57.015581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:13:57.015603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:14:10.628711 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:10.628938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:10.628971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:10.628991 | orchestrator | 
ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:10.629010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:10.629053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:10.629073 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:10.629095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:10.629106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:10.629117 | orchestrator | 2026-04-17 07:14:10.629128 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-17 07:14:10.629139 | orchestrator | Friday 17 April 2026 07:14:01 +0000 (0:00:05.763) 0:02:38.192 ********** 2026-04-17 07:14:10.629149 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-17 07:14:10.629159 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:14:10.629170 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-17 07:14:10.629180 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:14:10.629189 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-17 07:14:10.629199 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:14:10.629209 | orchestrator | 2026-04-17 07:14:10.629220 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-17 07:14:10.629231 | orchestrator | Friday 17 April 2026 07:14:02 +0000 (0:00:01.719) 0:02:39.912 ********** 2026-04-17 07:14:10.629242 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:14:10.629253 | orchestrator | 2026-04-17 07:14:10.629264 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI 
config] ************* 2026-04-17 07:14:10.629275 | orchestrator | Friday 17 April 2026 07:14:04 +0000 (0:00:01.674) 0:02:41.586 ********** 2026-04-17 07:14:10.629286 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:14:10.629298 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:14:10.629309 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:14:10.629320 | orchestrator | 2026-04-17 07:14:10.629330 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-17 07:14:10.629342 | orchestrator | Friday 17 April 2026 07:14:07 +0000 (0:00:03.093) 0:02:44.680 ********** 2026-04-17 07:14:10.629364 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:14:19.156840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:14:19.156965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:14:19.156986 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:19.156999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:19.157034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:19.157066 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:19.157080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:19.157092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:19.157104 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:19.157162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:19.157182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:26.564745 | orchestrator | 2026-04-17 07:14:26.564915 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-17 07:14:26.564934 | orchestrator | Friday 17 April 2026 07:14:20 +0000 (0:00:12.762) 0:02:57.443 ********** 2026-04-17 07:14:26.564946 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:14:26.564958 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:14:26.564970 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:14:26.564980 | orchestrator | 2026-04-17 07:14:26.564992 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-17 07:14:26.565003 | orchestrator | Friday 17 April 2026 07:14:23 +0000 (0:00:02.886) 0:03:00.329 ********** 2026-04-17 07:14:26.565013 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:14:26.565024 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:14:26.565036 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:14:26.565047 | orchestrator | 2026-04-17 07:14:26.565058 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-17 07:14:26.565069 | orchestrator | Friday 17 April 2026 07:14:25 +0000 (0:00:02.821) 0:03:03.151 ********** 2026-04-17 07:14:26.565086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:14:26.565104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:14:26.565143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:14:26.565156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:14:26.565169 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:14:26.565200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:14:26.565213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:14:26.565225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:14:26.565245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:14:26.565257 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:14:26.565269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:14:26.565292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:14:32.669178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:14:32.669289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:14:32.669331 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:14:32.669346 | orchestrator | 2026-04-17 07:14:32.669357 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-17 07:14:32.669369 | 
orchestrator | Friday 17 April 2026 07:14:27 +0000 (0:00:01.690) 0:03:04.842 ********** 2026-04-17 07:14:32.669380 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:14:32.669391 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:14:32.669401 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:14:32.669412 | orchestrator | 2026-04-17 07:14:32.669423 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-04-17 07:14:32.669433 | orchestrator | Friday 17 April 2026 07:14:29 +0000 (0:00:01.770) 0:03:06.612 ********** 2026-04-17 07:14:32.669448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:14:32.669462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:14:32.669494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:14:32.669548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:32.669562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:32.669574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:32.669586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:32.669607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:36.908860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:36.909013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:36.909033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:36.909045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:14:36.909057 | orchestrator | 2026-04-17 07:14:36.909116 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-04-17 07:14:36.909133 | orchestrator | Friday 17 April 2026 07:14:34 +0000 (0:00:05.157) 0:03:11.770 ********** 2026-04-17 07:14:36.909146 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 07:14:36.909159 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:14:36.909171 | orchestrator | } 2026-04-17 07:14:36.909183 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 07:14:36.909194 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:14:36.909204 | orchestrator | } 2026-04-17 07:14:36.909215 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 07:14:36.909226 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:14:36.909237 | orchestrator | } 2026-04-17 07:14:36.909248 | orchestrator | 2026-04-17 07:14:36.909259 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:14:36.909270 | orchestrator | Friday 17 April 2026 07:14:36 +0000 (0:00:01.822) 0:03:13.593 ********** 2026-04-17 07:14:36.909303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:14:36.909340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:14:36.909362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:14:36.909384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:14:36.909405 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:14:36.909426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:14:36.909451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:16:54.695309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:16:54.695460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:16:54.695488 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:16:54.695514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:16:54.695538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:16:54.695559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 07:16:54.695716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 07:16:54.695746 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:16:54.695759 | orchestrator | 2026-04-17 07:16:54.695771 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-17 07:16:54.695783 | orchestrator | Friday 17 April 2026 07:14:38 +0000 (0:00:01.753) 0:03:15.347 ********** 
2026-04-17 07:16:54.695794 | orchestrator | 2026-04-17 07:16:54.695804 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-17 07:16:54.695817 | orchestrator | Friday 17 April 2026 07:14:38 +0000 (0:00:00.445) 0:03:15.793 ********** 2026-04-17 07:16:54.695829 | orchestrator | 2026-04-17 07:16:54.695842 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-17 07:16:54.695855 | orchestrator | Friday 17 April 2026 07:14:39 +0000 (0:00:00.639) 0:03:16.432 ********** 2026-04-17 07:16:54.695867 | orchestrator | 2026-04-17 07:16:54.695879 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-17 07:16:54.695892 | orchestrator | Friday 17 April 2026 07:14:40 +0000 (0:00:00.812) 0:03:17.245 ********** 2026-04-17 07:16:54.695904 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:16:54.695917 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:16:54.695929 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:16:54.695941 | orchestrator | 2026-04-17 07:16:54.695952 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-17 07:16:54.695962 | orchestrator | Friday 17 April 2026 07:15:13 +0000 (0:00:33.206) 0:03:50.451 ********** 2026-04-17 07:16:54.695973 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:16:54.695983 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:16:54.695994 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:16:54.696004 | orchestrator | 2026-04-17 07:16:54.696015 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-17 07:16:54.696026 | orchestrator | Friday 17 April 2026 07:15:26 +0000 (0:00:12.811) 0:04:03.263 ********** 2026-04-17 07:16:54.696036 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:16:54.696046 | orchestrator | changed: [testbed-node-0] 
2026-04-17 07:16:54.696057 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:16:54.696068 | orchestrator | 2026-04-17 07:16:54.696078 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-17 07:16:54.696089 | orchestrator | Friday 17 April 2026 07:16:02 +0000 (0:00:36.532) 0:04:39.796 ********** 2026-04-17 07:16:54.696099 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:16:54.696110 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:16:54.696120 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:16:54.696131 | orchestrator | 2026-04-17 07:16:54.696141 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-17 07:16:54.696153 | orchestrator | Friday 17 April 2026 07:16:17 +0000 (0:00:14.417) 0:04:54.214 ********** 2026-04-17 07:16:54.696164 | orchestrator | Pausing for 30 seconds 2026-04-17 07:16:54.696187 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:16:54.696197 | orchestrator | 2026-04-17 07:16:54.696208 | orchestrator | TASK [cinder : Reload cinder services to remove RPC version pin] *************** 2026-04-17 07:16:54.696219 | orchestrator | Friday 17 April 2026 07:16:48 +0000 (0:00:31.519) 0:05:25.733 ********** 2026-04-17 07:16:54.696231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:16:54.696253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:17:33.543475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:17:33.543708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:33.543771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:33.543786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:33.543798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:33.543832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:33.543844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:33.543857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:33.543877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:33.543889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:33.543900 | orchestrator | 2026-04-17 07:17:33.543913 | orchestrator | TASK [cinder : Running Cinder online schema migration] ************************* 2026-04-17 07:17:33.543925 | orchestrator | Friday 17 April 2026 07:17:17 +0000 (0:00:29.311) 0:05:55.044 ********** 2026-04-17 07:17:33.543936 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:17:33.543948 | orchestrator | 2026-04-17 07:17:33.543959 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 07:17:33.543971 | orchestrator | testbed-node-0 : ok=44  changed=13  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 07:17:33.543985 | orchestrator | testbed-node-1 : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-17 07:17:33.543998 | orchestrator | testbed-node-2 : ok=25  changed=11  
unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-17 07:17:33.544010 | orchestrator | 2026-04-17 07:17:33.544022 | orchestrator | 2026-04-17 07:17:33.544034 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 07:17:33.544053 | orchestrator | Friday 17 April 2026 07:17:33 +0000 (0:00:15.667) 0:06:10.712 ********** 2026-04-17 07:17:34.007187 | orchestrator | =============================================================================== 2026-04-17 07:17:34.007306 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 38.28s 2026-04-17 07:17:34.007320 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 36.53s 2026-04-17 07:17:34.007330 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 33.21s 2026-04-17 07:17:34.007338 | orchestrator | cinder : Wait for cinder services to update service versions ----------- 31.52s 2026-04-17 07:17:34.007347 | orchestrator | cinder : Reload cinder services to remove RPC version pin -------------- 29.31s 2026-04-17 07:17:34.007356 | orchestrator | cinder : Running Cinder online schema migration ------------------------ 15.67s 2026-04-17 07:17:34.007364 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 14.42s 2026-04-17 07:17:34.007373 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.81s 2026-04-17 07:17:34.007418 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.76s 2026-04-17 07:17:34.007428 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.46s 2026-04-17 07:17:34.007437 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.40s 2026-04-17 07:17:34.007446 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.26s 
2026-04-17 07:17:34.007455 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.76s 2026-04-17 07:17:34.007465 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.26s 2026-04-17 07:17:34.007474 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.20s 2026-04-17 07:17:34.007482 | orchestrator | service-check-containers : cinder | Check containers -------------------- 5.16s 2026-04-17 07:17:34.007491 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.75s 2026-04-17 07:17:34.007500 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.75s 2026-04-17 07:17:34.007508 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.58s 2026-04-17 07:17:34.007517 | orchestrator | cinder : include_tasks -------------------------------------------------- 3.50s 2026-04-17 07:17:34.212565 | orchestrator | + osism apply -a upgrade barbican 2026-04-17 07:17:35.559049 | orchestrator | 2026-04-17 07:17:35 | INFO  | Prepare task for execution of barbican. 2026-04-17 07:17:35.624531 | orchestrator | 2026-04-17 07:17:35 | INFO  | Task cfca1263-16aa-403a-954a-16963433b57b (barbican) was prepared for execution. 2026-04-17 07:17:35.624667 | orchestrator | 2026-04-17 07:17:35 | INFO  | It takes a moment until task cfca1263-16aa-403a-954a-16963433b57b (barbican) has been started and output is visible here. 
2026-04-17 07:17:50.277716 | orchestrator | 2026-04-17 07:17:50.277833 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 07:17:50.277852 | orchestrator | 2026-04-17 07:17:50.277864 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 07:17:50.277875 | orchestrator | Friday 17 April 2026 07:17:40 +0000 (0:00:01.764) 0:00:01.764 ********** 2026-04-17 07:17:50.277886 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:17:50.277898 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:17:50.277909 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:17:50.277919 | orchestrator | 2026-04-17 07:17:50.277930 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 07:17:50.277941 | orchestrator | Friday 17 April 2026 07:17:42 +0000 (0:00:01.968) 0:00:03.732 ********** 2026-04-17 07:17:50.277952 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-17 07:17:50.277964 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-17 07:17:50.277974 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-17 07:17:50.277985 | orchestrator | 2026-04-17 07:17:50.277996 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-17 07:17:50.278007 | orchestrator | 2026-04-17 07:17:50.278073 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-17 07:17:50.278086 | orchestrator | Friday 17 April 2026 07:17:45 +0000 (0:00:02.318) 0:00:06.051 ********** 2026-04-17 07:17:50.278097 | orchestrator | included: /ansible/roles/barbican/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:17:50.278110 | orchestrator | 2026-04-17 07:17:50.278121 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 
2026-04-17 07:17:50.278132 | orchestrator | Friday 17 April 2026 07:17:48 +0000 (0:00:03.120) 0:00:09.171 ********** 2026-04-17 07:17:50.278149 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:17:50.278191 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:17:50.278225 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:17:50.278241 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:50.278256 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:50.278277 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:50.278292 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:50.278305 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:17:50.278326 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:01.041053 | orchestrator | 2026-04-17 07:18:01.041173 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-17 07:18:01.041192 | orchestrator | Friday 17 April 2026 07:17:51 +0000 (0:00:03.359) 0:00:12.530 ********** 2026-04-17 07:18:01.041205 | orchestrator | ok: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-17 07:18:01.041217 | orchestrator | ok: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-17 
07:18:01.041229 | orchestrator | ok: [testbed-node-2] => (item=barbican-api/vassals)
2026-04-17 07:18:01.041240 | orchestrator |
2026-04-17 07:18:01.041252 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-04-17 07:18:01.041263 | orchestrator | Friday 17 April 2026 07:17:53 +0000 (0:00:02.104) 0:00:14.635 **********
2026-04-17 07:18:01.041275 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:18:01.041287 | orchestrator |
2026-04-17 07:18:01.041298 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-04-17 07:18:01.041310 | orchestrator | Friday 17 April 2026 07:17:54 +0000 (0:00:01.133) 0:00:15.769 **********
2026-04-17 07:18:01.041321 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:18:01.041358 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:18:01.041370 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:18:01.041381 | orchestrator |
2026-04-17 07:18:01.041392 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-17 07:18:01.041403 | orchestrator | Friday 17 April 2026 07:17:56 +0000 (0:00:01.651) 0:00:17.420 **********
2026-04-17 07:18:01.041415 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 07:18:01.041426 | orchestrator |
2026-04-17 07:18:01.041437 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-04-17 07:18:01.041448 | orchestrator | Friday 17 April 2026 07:17:58 +0000 (0:00:01.704) 0:00:19.125 **********
2026-04-17 07:18:01.041464 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes':
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:01.041481 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:01.041514 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:01.041528 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:01.041550 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:01.041612 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:01.041628 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:01.041642 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:01.041665 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:04.288825 | orchestrator | 2026-04-17 07:18:04.288942 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-17 07:18:04.288995 | orchestrator | Friday 17 April 2026 07:18:02 +0000 (0:00:04.040) 0:00:23.165 ********** 2026-04-17 07:18:04.289015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:18:04.289042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:18:04.289055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:18:04.289067 | orchestrator | 
skipping: [testbed-node-0] 2026-04-17 07:18:04.289080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:18:04.289111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:18:04.289133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:18:04.289144 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:18:04.289156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:18:04.289168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:18:04.289180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:18:04.289191 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:18:04.289205 | orchestrator | 2026-04-17 07:18:04.289223 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-17 07:18:04.289242 | orchestrator | Friday 17 April 2026 07:18:03 +0000 (0:00:01.836) 0:00:25.002 ********** 2026-04-17 07:18:04.289272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:18:07.284878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:18:07.284986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:18:07.285002 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:18:07.285018 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:18:07.285032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:18:07.285067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:18:07.285079 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:18:07.285110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:18:07.285124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:18:07.285136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:18:07.285147 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:18:07.285158 | orchestrator | 2026-04-17 07:18:07.285170 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-17 07:18:07.285182 | orchestrator | Friday 17 April 2026 07:18:05 +0000 (0:00:01.688) 0:00:26.691 ********** 2026-04-17 07:18:07.285194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:07.285222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:19.672342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:19.672453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:19.672470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:19.672507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:19.672520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:19.672607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:19.672622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:19.672634 | orchestrator | 2026-04-17 07:18:19.672647 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-17 07:18:19.672659 | orchestrator | Friday 17 April 2026 07:18:10 +0000 (0:00:04.646) 0:00:31.337 ********** 2026-04-17 07:18:19.672670 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:18:19.672682 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:18:19.672692 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:18:19.672703 | orchestrator | 2026-04-17 07:18:19.672714 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-17 07:18:19.672724 | orchestrator | Friday 17 April 2026 07:18:12 +0000 (0:00:02.587) 0:00:33.925 ********** 2026-04-17 07:18:19.672735 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 07:18:19.672746 | orchestrator | 2026-04-17 07:18:19.672757 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-17 07:18:19.672768 | orchestrator | Friday 17 April 2026 07:18:15 +0000 (0:00:02.410) 0:00:36.336 ********** 2026-04-17 07:18:19.672778 | orchestrator | 
skipping: [testbed-node-0] 2026-04-17 07:18:19.672789 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:18:19.672799 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:18:19.672810 | orchestrator | 2026-04-17 07:18:19.672820 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-17 07:18:19.672840 | orchestrator | Friday 17 April 2026 07:18:16 +0000 (0:00:01.607) 0:00:37.944 ********** 2026-04-17 07:18:19.672853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:19.672866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:19.672890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:25.580844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:25.580958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:25.580996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:25.581009 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:25.581020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:25.581032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:25.581044 | orchestrator | 2026-04-17 07:18:25.581056 | orchestrator | TASK 
[barbican : Copying over existing policy file] **************************** 2026-04-17 07:18:25.581086 | orchestrator | Friday 17 April 2026 07:18:24 +0000 (0:00:07.892) 0:00:45.836 ********** 2026-04-17 07:18:25.581101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:18:25.581123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-04-17 07:18:25.581135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:18:25.581146 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:18:25.581158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:18:25.581177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:18:29.397112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:18:29.397239 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:18:29.397258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:18:29.397271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:18:29.397283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:18:29.397294 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:18:29.397304 | orchestrator | 2026-04-17 07:18:29.397315 | orchestrator | TASK [service-check-containers : barbican | Check 
containers] ****************** 2026-04-17 07:18:29.397325 | orchestrator | Friday 17 April 2026 07:18:27 +0000 (0:00:02.248) 0:00:48.084 ********** 2026-04-17 07:18:29.397352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:29.397378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:29.397391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:18:29.397402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:29.397414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:29.397433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:33.667795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:33.667909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:33.667925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:18:33.667938 | orchestrator | 2026-04-17 07:18:33.667952 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-04-17 07:18:33.667965 | orchestrator | Friday 17 April 2026 07:18:31 +0000 (0:00:04.331) 0:00:52.416 ********** 2026-04-17 07:18:33.667978 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 07:18:33.667991 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-17 07:18:33.668003 | orchestrator | } 2026-04-17 07:18:33.668015 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 07:18:33.668027 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:18:33.668038 | orchestrator | } 2026-04-17 07:18:33.668050 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 07:18:33.668061 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:18:33.668073 | orchestrator | } 2026-04-17 07:18:33.668085 | orchestrator | 2026-04-17 07:18:33.668097 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:18:33.668108 | orchestrator | Friday 17 April 2026 07:18:32 +0000 (0:00:01.391) 0:00:53.808 ********** 2026-04-17 07:18:33.668123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:18:33.668182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:18:33.668198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:18:33.668211 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:18:33.668224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:18:33.668237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:18:33.668249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:18:33.668269 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:18:33.668290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:21:35.694383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:21:35.694547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:21:35.694565 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:21:35.694579 | orchestrator | 2026-04-17 07:21:35.694591 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-17 07:21:35.694604 | orchestrator | Friday 17 April 2026 07:18:35 +0000 (0:00:02.567) 0:00:56.376 ********** 2026-04-17 07:21:35.694615 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:21:35.694626 | orchestrator | 2026-04-17 07:21:35.694638 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-17 07:21:35.694649 | orchestrator | Friday 17 April 2026 07:18:48 +0000 (0:00:13.535) 0:01:09.911 ********** 2026-04-17 07:21:35.694660 | orchestrator | 2026-04-17 07:21:35.694671 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-17 07:21:35.694682 | orchestrator | Friday 17 April 2026 07:18:49 +0000 (0:00:00.450) 0:01:10.362 ********** 2026-04-17 07:21:35.694692 | orchestrator | 2026-04-17 07:21:35.694703 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-17 07:21:35.694714 | orchestrator | Friday 17 April 2026 07:18:49 +0000 (0:00:00.447) 0:01:10.810 ********** 2026-04-17 07:21:35.694725 | orchestrator | 2026-04-17 07:21:35.694736 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-17 07:21:35.694773 | orchestrator | Friday 17 April 2026 07:18:50 +0000 (0:00:00.820) 0:01:11.631 ********** 2026-04-17 07:21:35.694784 | orchestrator | changed: 
[testbed-node-0] 2026-04-17 07:21:35.694795 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:21:35.694806 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:21:35.694817 | orchestrator | 2026-04-17 07:21:35.694828 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-17 07:21:35.694839 | orchestrator | Friday 17 April 2026 07:21:05 +0000 (0:02:14.397) 0:03:26.028 ********** 2026-04-17 07:21:35.694851 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:21:35.694862 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:21:35.694873 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:21:35.694884 | orchestrator | 2026-04-17 07:21:35.694896 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-17 07:21:35.694908 | orchestrator | Friday 17 April 2026 07:21:17 +0000 (0:00:12.493) 0:03:38.522 ********** 2026-04-17 07:21:35.694921 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:21:35.694933 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:21:35.694945 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:21:35.694957 | orchestrator | 2026-04-17 07:21:35.694969 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 07:21:35.694983 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-17 07:21:35.694998 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 07:21:35.695011 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 07:21:35.695023 | orchestrator | 2026-04-17 07:21:35.695035 | orchestrator | 2026-04-17 07:21:35.695048 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 07:21:35.695061 | orchestrator | Friday 17 April 2026 
07:21:35 +0000 (0:00:17.772) 0:03:56.294 ********** 2026-04-17 07:21:35.695073 | orchestrator | =============================================================================== 2026-04-17 07:21:35.695086 | orchestrator | barbican : Restart barbican-api container ----------------------------- 134.40s 2026-04-17 07:21:35.695098 | orchestrator | barbican : Restart barbican-worker container --------------------------- 17.77s 2026-04-17 07:21:35.695111 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.54s 2026-04-17 07:21:35.695123 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.50s 2026-04-17 07:21:35.695136 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 7.89s 2026-04-17 07:21:35.695164 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.65s 2026-04-17 07:21:35.695176 | orchestrator | service-check-containers : barbican | Check containers ------------------ 4.33s 2026-04-17 07:21:35.695187 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.04s 2026-04-17 07:21:35.695198 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.36s 2026-04-17 07:21:35.695208 | orchestrator | barbican : include_tasks ------------------------------------------------ 3.12s 2026-04-17 07:21:35.695219 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.59s 2026-04-17 07:21:35.695230 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.57s 2026-04-17 07:21:35.695242 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.41s 2026-04-17 07:21:35.695257 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.32s 2026-04-17 07:21:35.695274 | orchestrator | barbican : Copying over existing policy 
file ---------------------------- 2.25s 2026-04-17 07:21:35.695293 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.10s 2026-04-17 07:21:35.695305 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.97s 2026-04-17 07:21:35.695324 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.84s 2026-04-17 07:21:35.695336 | orchestrator | barbican : Flush handlers ----------------------------------------------- 1.72s 2026-04-17 07:21:35.695347 | orchestrator | barbican : include_tasks ------------------------------------------------ 1.70s 2026-04-17 07:21:35.895223 | orchestrator | + osism apply -a upgrade designate 2026-04-17 07:21:37.186487 | orchestrator | 2026-04-17 07:21:37 | INFO  | Prepare task for execution of designate. 2026-04-17 07:21:37.252614 | orchestrator | 2026-04-17 07:21:37 | INFO  | Task 80091159-3240-461c-86ab-651d95da236c (designate) was prepared for execution. 2026-04-17 07:21:37.252709 | orchestrator | 2026-04-17 07:21:37 | INFO  | It takes a moment until task 80091159-3240-461c-86ab-651d95da236c (designate) has been started and output is visible here. 
2026-04-17 07:21:50.915709 | orchestrator | 2026-04-17 07:21:50.915830 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 07:21:50.915847 | orchestrator | 2026-04-17 07:21:50.915860 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 07:21:50.915871 | orchestrator | Friday 17 April 2026 07:21:42 +0000 (0:00:02.248) 0:00:02.248 ********** 2026-04-17 07:21:50.915882 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:21:50.915894 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:21:50.915905 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:21:50.915915 | orchestrator | 2026-04-17 07:21:50.915926 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 07:21:50.915937 | orchestrator | Friday 17 April 2026 07:21:44 +0000 (0:00:01.779) 0:00:04.027 ********** 2026-04-17 07:21:50.915949 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-17 07:21:50.915960 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-17 07:21:50.915971 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-17 07:21:50.915981 | orchestrator | 2026-04-17 07:21:50.915992 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-17 07:21:50.916003 | orchestrator | 2026-04-17 07:21:50.916013 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-17 07:21:50.916024 | orchestrator | Friday 17 April 2026 07:21:46 +0000 (0:00:01.822) 0:00:05.849 ********** 2026-04-17 07:21:50.916036 | orchestrator | included: /ansible/roles/designate/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:21:50.916047 | orchestrator | 2026-04-17 07:21:50.916057 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 
2026-04-17 07:21:50.916068 | orchestrator | Friday 17 April 2026 07:21:48 +0000 (0:00:02.031) 0:00:07.881 ********** 2026-04-17 07:21:50.916083 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:21:50.916101 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:21:50.916158 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:21:50.916173 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:21:50.916186 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:21:50.916198 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:21:50.916210 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 
2026-04-17 07:21:50.916229 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:21:50.916242 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:21:50.916263 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 
07:21:58.967855 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:21:58.967983 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:21:58.968000 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:21:58.968037 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:21:58.968049 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:21:58.968061 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:21:58.968093 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:21:58.968107 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:21:58.968119 | orchestrator | 2026-04-17 07:21:58.968133 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-17 07:21:58.968145 | orchestrator | Friday 17 April 2026 07:21:53 +0000 (0:00:04.526) 0:00:12.407 ********** 2026-04-17 07:21:58.968157 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:21:58.968168 | orchestrator | 2026-04-17 07:21:58.968180 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-17 07:21:58.968200 | orchestrator | Friday 17 April 2026 07:21:54 +0000 (0:00:01.137) 0:00:13.545 ********** 2026-04-17 07:21:58.968211 | orchestrator | skipping: [testbed-node-0] 2026-04-17 
07:21:58.968221 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:21:58.968232 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:21:58.968242 | orchestrator | 2026-04-17 07:21:58.968253 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-17 07:21:58.968264 | orchestrator | Friday 17 April 2026 07:21:55 +0000 (0:00:01.391) 0:00:14.936 ********** 2026-04-17 07:21:58.968275 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:21:58.968286 | orchestrator | 2026-04-17 07:21:58.968297 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-17 07:21:58.968308 | orchestrator | Friday 17 April 2026 07:21:57 +0000 (0:00:01.926) 0:00:16.863 ********** 2026-04-17 07:21:58.968319 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:21:58.968335 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:21:58.968356 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:22:02.987019 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987151 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987169 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987182 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987194 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987205 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987250 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987272 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987283 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987295 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987307 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987319 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:02.987339 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:05.296228 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:05.296336 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:05.296352 | orchestrator | 2026-04-17 07:22:05.296366 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-17 07:22:05.296379 | orchestrator | Friday 17 April 2026 07:22:04 +0000 (0:00:06.764) 0:00:23.627 ********** 2026-04-17 07:22:05.296442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:05.296459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:22:05.296472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:05.296528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:05.296543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:22:05.296555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:05.296567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:05.296579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:22:05.296613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.586263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.586360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.586374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.586441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.586453 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:22:07.586465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.586519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.586561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.586572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.586582 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:22:07.586593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.586603 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:22:07.586613 | orchestrator | 2026-04-17 07:22:07.586624 | orchestrator | TASK [service-cert-copy : designate | Copying over backend 
internal TLS key] *** 2026-04-17 07:22:07.586634 | orchestrator | Friday 17 April 2026 07:22:06 +0000 (0:00:02.498) 0:00:26.126 ********** 2026-04-17 07:22:07.586646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:07.586667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:22:07.586697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.955228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:07.955339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.955357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:22:07.955370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-17 07:22:07.955493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.955545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.955568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:22:07.955588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.955609 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:22:07.955632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.955667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.955679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:07.955707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:12.444723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 07:22:12.444837 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:22:12.444856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 07:22:12.444870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 07:22:12.444903 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:22:12.444915 | orchestrator |
2026-04-17 07:22:12.444927 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-04-17 07:22:12.444943 | orchestrator | Friday 17 April 2026 07:22:09 +0000 (0:00:02.496) 0:00:28.623 **********
2026-04-17 07:22:12.444962 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:22:12.445025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:22:12.445046 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:22:12.445067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:12.445099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:12.445117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:12.445143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:12.445174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:19.528211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:19.528347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:19.528428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:19.528443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:19.528455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:19.528481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:19.528511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:19.528524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:19.528543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 07:22:19.528555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 07:22:19.528567 | orchestrator |
2026-04-17 07:22:19.528580 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-04-17 07:22:19.528592 | orchestrator | Friday 17 April 2026 07:22:16 +0000 (0:00:07.211) 0:00:35.834 **********
2026-04-17 07:22:19.528625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:22:19.528650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:22:29.163161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:22:29.163310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:29.163339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 
53'], 'timeout': '30'}}}) 2026-04-17 07:22:29.163359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:29.163474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:29.163523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:29.163559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:29.163579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:29.163600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:29.163620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:29.163647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:29.163659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:29.163681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:42.530798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:42.530919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 07:22:42.530937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 07:22:42.530951 | orchestrator |
2026-04-17 07:22:42.530965 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-04-17 07:22:42.530978 | orchestrator | Friday 17 April 2026 07:22:32 +0000 (0:00:16.459) 0:00:52.293 **********
2026-04-17 07:22:42.530990 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-17 07:22:42.531003 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-17 07:22:42.531014 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-17 07:22:42.531025 | orchestrator |
2026-04-17 07:22:42.531037 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-04-17 07:22:42.531048 | orchestrator | Friday 17 April 2026 07:22:37 +0000 (0:00:04.667) 0:00:56.961 **********
2026-04-17 07:22:42.531076 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-17 07:22:42.531088 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-17 07:22:42.531099 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-17 07:22:42.531110 | orchestrator |
2026-04-17 07:22:42.531122 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-04-17 07:22:42.531133 | orchestrator | Friday 17 April 2026 07:22:41 +0000 (0:00:03.571) 0:01:00.532 **********
2026-04-17 07:22:42.531146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:22:42.531209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:42.531224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:42.531237 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:42.531270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:42.531293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:42.531314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:45.849691 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:45.849794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:45.849811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:45.849839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:45.849876 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:45.849889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:45.849918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:45.849931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:45.849942 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:45.849958 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:45.849977 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:45.849989 | orchestrator | 2026-04-17 07:22:45.850001 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-17 07:22:45.850014 | orchestrator | Friday 17 April 2026 07:22:45 +0000 (0:00:04.056) 0:01:04.589 ********** 2026-04-17 07:22:45.850135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:47.043472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:47.043575 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:47.043611 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:47.043647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:47.043662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:47.043693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:47.043706 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:47.043717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:47.043734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:47.043784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:47.043797 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:47.043817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:51.052438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:51.052568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:51.052590 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:51.052659 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:51.052680 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:51.052700 | orchestrator | 2026-04-17 07:22:51.052719 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-17 07:22:51.052739 | orchestrator | Friday 17 April 2026 07:22:49 +0000 (0:00:03.834) 0:01:08.424 ********** 2026-04-17 07:22:51.052757 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:22:51.052776 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:22:51.052792 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:22:51.052810 | orchestrator | 2026-04-17 07:22:51.052828 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-17 07:22:51.052846 | orchestrator | Friday 17 April 2026 07:22:50 +0000 (0:00:01.457) 
0:01:09.882 ********** 2026-04-17 07:22:51.052894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:51.052922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:22:51.052965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:51.052995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:51.053016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:51.053031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:51.053055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:22:54.258207 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:22:54.258318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:22:54.258429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:54.258456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:54.258477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:54.258496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:22:54.258515 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:22:54.258553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:22:54.258615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:22:54.258634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:22:54.258646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:22:54.258657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:22:54.258668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:22:54.258679 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:22:54.258690 | orchestrator | 2026-04-17 07:22:54.258703 | orchestrator | TASK [service-check-containers : designate | Check containers] ***************** 2026-04-17 07:22:54.258715 | orchestrator | Friday 17 April 2026 07:22:52 +0000 (0:00:02.172) 0:01:12.055 ********** 2026-04-17 07:22:54.258736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:22:57.471252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:22:57.471410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:22:57.471429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:57.471443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:57.471475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 07:22:57.471503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:57.471522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:57.471533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:57.471544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:57.471554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:57.471572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 07:22:57.471590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:23:01.448590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:23:01.448723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 07:23:01.448747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:23:01.448767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:23:01.448786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:23:01.448833 | orchestrator | 2026-04-17 07:23:01.448855 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] *** 2026-04-17 07:23:01.448875 | orchestrator | Friday 17 April 2026 07:22:59 +0000 (0:00:06.836) 0:01:18.891 ********** 2026-04-17 07:23:01.448895 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 07:23:01.448915 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:23:01.448935 | orchestrator | } 2026-04-17 07:23:01.448954 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 07:23:01.448972 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:23:01.448992 | orchestrator | } 2026-04-17 07:23:01.449012 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 07:23:01.449031 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:23:01.449050 | orchestrator | } 
2026-04-17 07:23:01.449080 | orchestrator | 2026-04-17 07:23:01.449101 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:23:01.449120 | orchestrator | Friday 17 April 2026 07:23:00 +0000 (0:00:01.394) 0:01:20.286 ********** 2026-04-17 07:23:01.449183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:23:01.449213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:23:01.449236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:23:01.449271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:23:01.449293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:23:01.449313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:23:01.449334 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:23:01.449421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:23:20.171807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:23:20.171915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:23:20.171961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:23:20.171982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:23:20.171995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:23:20.172007 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:23:20.172052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:23:20.172069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 07:23:20.172089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 07:23:20.172101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 07:23:20.172112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 07:23:20.172123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:23:20.172134 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:23:20.172145 | orchestrator | 2026-04-17 07:23:20.172157 | orchestrator | TASK [designate : Running Designate bootstrap 
container] *********************** 2026-04-17 07:23:20.172169 | orchestrator | Friday 17 April 2026 07:23:03 +0000 (0:00:02.182) 0:01:22.469 ********** 2026-04-17 07:23:20.172180 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:23:20.172191 | orchestrator | 2026-04-17 07:23:20.172201 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-17 07:23:20.172217 | orchestrator | Friday 17 April 2026 07:23:19 +0000 (0:00:15.988) 0:01:38.458 ********** 2026-04-17 07:23:20.172228 | orchestrator | 2026-04-17 07:23:20.172239 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-17 07:23:20.172250 | orchestrator | Friday 17 April 2026 07:23:19 +0000 (0:00:00.619) 0:01:39.077 ********** 2026-04-17 07:23:20.172260 | orchestrator | 2026-04-17 07:23:20.172271 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-17 07:23:20.172289 | orchestrator | Friday 17 April 2026 07:23:20 +0000 (0:00:00.482) 0:01:39.559 ********** 2026-04-17 07:25:38.453141 | orchestrator | 2026-04-17 07:25:38.453361 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-17 07:25:38.453395 | orchestrator | Friday 17 April 2026 07:23:20 +0000 (0:00:00.838) 0:01:40.398 ********** 2026-04-17 07:25:38.453451 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:25:38.453474 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:25:38.453491 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:25:38.453509 | orchestrator | 2026-04-17 07:25:38.453528 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-17 07:25:38.453547 | orchestrator | Friday 17 April 2026 07:23:36 +0000 (0:00:15.118) 0:01:55.517 ********** 2026-04-17 07:25:38.453565 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:25:38.453583 | orchestrator | changed: 
[testbed-node-2] 2026-04-17 07:25:38.453601 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:25:38.453619 | orchestrator | 2026-04-17 07:25:38.453637 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-17 07:25:38.453655 | orchestrator | Friday 17 April 2026 07:23:49 +0000 (0:00:13.254) 0:02:08.772 ********** 2026-04-17 07:25:38.453674 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:25:38.453692 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:25:38.453711 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:25:38.453730 | orchestrator | 2026-04-17 07:25:38.453748 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-17 07:25:38.453767 | orchestrator | Friday 17 April 2026 07:24:03 +0000 (0:00:13.639) 0:02:22.411 ********** 2026-04-17 07:25:38.453785 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:25:38.453803 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:25:38.453820 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:25:38.453838 | orchestrator | 2026-04-17 07:25:38.453855 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-17 07:25:38.453871 | orchestrator | Friday 17 April 2026 07:25:01 +0000 (0:00:58.447) 0:03:20.859 ********** 2026-04-17 07:25:38.453887 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:25:38.453905 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:25:38.453922 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:25:38.453940 | orchestrator | 2026-04-17 07:25:38.453957 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-17 07:25:38.453976 | orchestrator | Friday 17 April 2026 07:25:14 +0000 (0:00:13.493) 0:03:34.353 ********** 2026-04-17 07:25:38.453994 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:25:38.454012 | orchestrator | changed: 
[testbed-node-2] 2026-04-17 07:25:38.454103 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:25:38.454130 | orchestrator | 2026-04-17 07:25:38.454148 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-17 07:25:38.454164 | orchestrator | Friday 17 April 2026 07:25:29 +0000 (0:00:14.114) 0:03:48.468 ********** 2026-04-17 07:25:38.454181 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:25:38.454197 | orchestrator | 2026-04-17 07:25:38.454214 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 07:25:38.454232 | orchestrator | testbed-node-0 : ok=22  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-17 07:25:38.454275 | orchestrator | testbed-node-1 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 07:25:38.454294 | orchestrator | testbed-node-2 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 07:25:38.454314 | orchestrator | 2026-04-17 07:25:38.454331 | orchestrator | 2026-04-17 07:25:38.454351 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 07:25:38.454369 | orchestrator | Friday 17 April 2026 07:25:38 +0000 (0:00:09.024) 0:03:57.493 ********** 2026-04-17 07:25:38.454380 | orchestrator | =============================================================================== 2026-04-17 07:25:38.454391 | orchestrator | designate : Restart designate-producer container ----------------------- 58.45s 2026-04-17 07:25:38.454401 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.46s 2026-04-17 07:25:38.454428 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.99s 2026-04-17 07:25:38.454438 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 15.12s 2026-04-17 07:25:38.454449 | 
orchestrator | designate : Restart designate-worker container ------------------------- 14.11s 2026-04-17 07:25:38.454460 | orchestrator | designate : Restart designate-central container ------------------------ 13.64s 2026-04-17 07:25:38.454471 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.49s 2026-04-17 07:25:38.454481 | orchestrator | designate : Restart designate-api container ---------------------------- 13.26s 2026-04-17 07:25:38.454492 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 9.02s 2026-04-17 07:25:38.454503 | orchestrator | designate : Copying over config.json files for services ----------------- 7.21s 2026-04-17 07:25:38.454513 | orchestrator | service-check-containers : designate | Check containers ----------------- 6.84s 2026-04-17 07:25:38.454523 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.77s 2026-04-17 07:25:38.454534 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.67s 2026-04-17 07:25:38.454561 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.53s 2026-04-17 07:25:38.454572 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.06s 2026-04-17 07:25:38.454583 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.83s 2026-04-17 07:25:38.454593 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.57s 2026-04-17 07:25:38.454625 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS certificate --- 2.50s 2026-04-17 07:25:38.454637 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 2.50s 2026-04-17 07:25:38.454648 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.18s 2026-04-17 07:25:38.647184 | 
orchestrator | + osism apply -a upgrade ceilometer 2026-04-17 07:25:39.915126 | orchestrator | 2026-04-17 07:25:39 | INFO  | Prepare task for execution of ceilometer. 2026-04-17 07:25:39.979946 | orchestrator | 2026-04-17 07:25:39 | INFO  | Task f00d30ea-883b-4e5f-b279-3d0db25fad7c (ceilometer) was prepared for execution. 2026-04-17 07:25:39.980069 | orchestrator | 2026-04-17 07:25:39 | INFO  | It takes a moment until task f00d30ea-883b-4e5f-b279-3d0db25fad7c (ceilometer) has been started and output is visible here. 2026-04-17 07:25:53.142384 | orchestrator | 2026-04-17 07:25:53.142493 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 07:25:53.142511 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-17 07:25:53.142525 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-17 07:25:53.142547 | orchestrator | 2026-04-17 07:25:53.142558 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 07:25:53.142569 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-17 07:25:53.142580 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-17 07:25:53.142601 | orchestrator | Friday 17 April 2026 07:25:44 +0000 (0:00:01.152) 0:00:01.152 ********** 2026-04-17 07:25:53.142613 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:25:53.142624 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:25:53.142635 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:25:53.142647 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:25:53.142658 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:25:53.142669 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:25:53.142680 | orchestrator | 2026-04-17 07:25:53.142691 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 07:25:53.142702 | 
orchestrator | Friday 17 April 2026 07:25:46 +0000 (0:00:01.717) 0:00:02.869 ********** 2026-04-17 07:25:53.142738 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-04-17 07:25:53.142750 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-04-17 07:25:53.142761 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-04-17 07:25:53.142772 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-04-17 07:25:53.142783 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-04-17 07:25:53.142793 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-04-17 07:25:53.142804 | orchestrator | 2026-04-17 07:25:53.142815 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-04-17 07:25:53.142826 | orchestrator | 2026-04-17 07:25:53.142836 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-17 07:25:53.142847 | orchestrator | Friday 17 April 2026 07:25:47 +0000 (0:00:01.269) 0:00:04.139 ********** 2026-04-17 07:25:53.142859 | orchestrator | included: /ansible/roles/ceilometer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 07:25:53.142872 | orchestrator | 2026-04-17 07:25:53.142885 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-04-17 07:25:53.142898 | orchestrator | Friday 17 April 2026 07:25:49 +0000 (0:00:01.792) 0:00:05.931 ********** 2026-04-17 07:25:53.142915 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:25:53.142946 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:25:53.142961 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:25:53.142994 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:25:53.143016 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:25:53.143036 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:25:53.143058 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 
'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:25:53.143089 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:25:53.143111 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 
07:25:53.143132 | orchestrator | 2026-04-17 07:25:53.143153 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-17 07:25:53.143173 | orchestrator | Friday 17 April 2026 07:25:51 +0000 (0:00:02.238) 0:00:08.169 ********** 2026-04-17 07:25:53.143194 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 07:25:56.882102 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 07:25:56.882223 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 07:25:56.882325 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 07:25:56.882339 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 07:25:56.882350 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 07:25:56.882361 | orchestrator | 2026-04-17 07:25:56.882373 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-04-17 07:25:56.882386 | orchestrator | Friday 17 April 2026 07:25:54 +0000 (0:00:02.650) 0:00:10.820 ********** 2026-04-17 07:25:56.882397 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:25:56.882409 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:25:56.882420 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:25:56.882431 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:25:56.882442 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:25:56.882453 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:25:56.882463 | orchestrator | 2026-04-17 07:25:56.882475 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-04-17 07:25:56.882486 | orchestrator | Friday 17 April 2026 07:25:54 +0000 (0:00:00.442) 0:00:11.263 ********** 2026-04-17 07:25:56.882497 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:25:56.882509 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:25:56.882520 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:25:56.882530 | 
orchestrator | skipping: [testbed-node-3] 2026-04-17 07:25:56.882541 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:25:56.882552 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:25:56.882563 | orchestrator | 2026-04-17 07:25:56.882575 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter definitions] *** 2026-04-17 07:25:56.882589 | orchestrator | Friday 17 April 2026 07:25:55 +0000 (0:00:00.698) 0:00:11.962 ********** 2026-04-17 07:25:56.882601 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:25:56.882613 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:25:56.882626 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:25:56.882638 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:25:56.882649 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:25:56.882660 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:25:56.882671 | orchestrator | 2026-04-17 07:25:56.882682 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-04-17 07:25:56.882693 | orchestrator | Friday 17 April 2026 07:25:55 +0000 (0:00:00.668) 0:00:12.631 ********** 2026-04-17 07:25:56.882708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:25:56.882724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': 
{'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:25:56.882749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:25:56.882789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:25:56.882802 | orchestrator | skipping: [testbed-node-0] 2026-04-17 
07:25:56.882813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:25:56.882825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:25:56.882836 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:25:56.882847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:25:56.882859 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:25:56.882871 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:25:56.882882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:25:56.882900 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:25:56.882916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:25:56.882927 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:25:56.882939 | orchestrator | 2026-04-17 
07:25:56.882950 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-04-17 07:25:56.882961 | orchestrator | Friday 17 April 2026 07:25:56 +0000 (0:00:00.763) 0:00:13.395 ********** 2026-04-17 07:25:56.882980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:03.818226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:03.818407 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:03.818426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 
'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:03.818439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:03.818451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:03.818500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:03.818512 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:03.818524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:03.818537 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:03.818566 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:03.818578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:03.818589 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:03.818601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:03.818612 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:03.818624 | orchestrator | 2026-04-17 07:26:03.818637 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-04-17 07:26:03.818649 | orchestrator | Friday 17 April 2026 07:25:57 +0000 (0:00:00.916) 0:00:14.311 ********** 2026-04-17 07:26:03.818661 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 07:26:03.818680 | orchestrator | 2026-04-17 07:26:03.818691 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-04-17 07:26:03.818703 | orchestrator | Friday 17 April 2026 07:25:58 +0000 (0:00:00.765) 0:00:15.077 ********** 2026-04-17 07:26:03.818714 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:26:03.818726 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:26:03.818736 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:26:03.818747 | orchestrator | ok: 
[testbed-node-3] 2026-04-17 07:26:03.818760 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:26:03.818772 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:26:03.818785 | orchestrator | 2026-04-17 07:26:03.818797 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-04-17 07:26:03.818810 | orchestrator | Friday 17 April 2026 07:25:59 +0000 (0:00:00.657) 0:00:15.734 ********** 2026-04-17 07:26:03.818821 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:26:03.818834 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:26:03.818846 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:26:03.818858 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:26:03.818870 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:26:03.818883 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:26:03.818895 | orchestrator | 2026-04-17 07:26:03.818907 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-04-17 07:26:03.818920 | orchestrator | Friday 17 April 2026 07:26:00 +0000 (0:00:01.257) 0:00:16.992 ********** 2026-04-17 07:26:03.818937 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:03.818950 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:03.818962 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:03.818975 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:03.818987 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:03.818999 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:03.819011 | orchestrator | 2026-04-17 07:26:03.819024 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-04-17 07:26:03.819036 | orchestrator | Friday 17 April 2026 07:26:00 +0000 (0:00:00.661) 0:00:17.654 ********** 2026-04-17 07:26:03.819049 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:03.819062 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:03.819074 | 
orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:03.819087 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:03.819099 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:03.819110 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:03.819121 | orchestrator | 2026-04-17 07:26:03.819132 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] ************************ 2026-04-17 07:26:03.819142 | orchestrator | Friday 17 April 2026 07:26:01 +0000 (0:00:00.842) 0:00:18.496 ********** 2026-04-17 07:26:03.819153 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 07:26:03.819164 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 07:26:03.819175 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 07:26:03.819185 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 07:26:03.819196 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 07:26:03.819206 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 07:26:03.819217 | orchestrator | 2026-04-17 07:26:03.819228 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-04-17 07:26:03.819265 | orchestrator | Friday 17 April 2026 07:26:03 +0000 (0:00:01.732) 0:00:20.229 ********** 2026-04-17 07:26:03.819288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': 
'30'}}})  2026-04-17 07:26:07.365568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:07.365656 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:07.365668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:07.365676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:07.365683 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:07.365706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:07.365712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:07.365719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:07.365751 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:07.365772 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:07.365780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:07.365787 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:07.365794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 
'timeout': '30'}}})  2026-04-17 07:26:07.365801 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:07.365807 | orchestrator | 2026-04-17 07:26:07.365814 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-04-17 07:26:07.365821 | orchestrator | Friday 17 April 2026 07:26:04 +0000 (0:00:01.121) 0:00:21.350 ********** 2026-04-17 07:26:07.365828 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:07.365834 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:07.365839 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:07.365845 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:07.365851 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:07.365858 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:07.365864 | orchestrator | 2026-04-17 07:26:07.365871 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-04-17 07:26:07.365877 | orchestrator | Friday 17 April 2026 07:26:05 +0000 (0:00:00.811) 0:00:22.161 ********** 2026-04-17 07:26:07.365884 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 07:26:07.365890 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 07:26:07.365896 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 07:26:07.365902 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 07:26:07.365914 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 07:26:07.365920 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 07:26:07.365926 | orchestrator | 2026-04-17 07:26:07.365932 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-04-17 07:26:07.365938 | orchestrator | Friday 17 April 2026 07:26:06 +0000 (0:00:01.556) 0:00:23.718 ********** 2026-04-17 07:26:07.365945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 
'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:07.365959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:07.365966 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:07.365981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 
'timeout': '30'}}})  2026-04-17 07:26:12.821416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:12.821530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:12.821562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:12.821576 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:12.821591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:12.821628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:12.821640 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:12.821651 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:12.821662 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:12.821692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 
'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:12.821704 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:12.821715 | orchestrator | 2026-04-17 07:26:12.821727 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-04-17 07:26:12.821740 | orchestrator | Friday 17 April 2026 07:26:08 +0000 (0:00:01.147) 0:00:24.866 ********** 2026-04-17 07:26:12.821750 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:12.821761 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:12.821772 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:12.821783 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:12.821793 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:12.821804 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:12.821814 | orchestrator | 2026-04-17 07:26:12.821825 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-04-17 07:26:12.821836 | orchestrator | Friday 17 April 2026 07:26:08 +0000 (0:00:00.624) 0:00:25.490 ********** 2026-04-17 07:26:12.821847 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:12.821858 | orchestrator | 2026-04-17 07:26:12.821869 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-04-17 07:26:12.821880 | orchestrator | Friday 17 April 2026 07:26:08 +0000 (0:00:00.140) 0:00:25.631 ********** 
2026-04-17 07:26:12.821891 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:12.821904 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:12.821917 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:12.821930 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:12.821942 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:12.821954 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:12.821966 | orchestrator | 2026-04-17 07:26:12.821978 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-17 07:26:12.821991 | orchestrator | Friday 17 April 2026 07:26:09 +0000 (0:00:00.781) 0:00:26.412 ********** 2026-04-17 07:26:12.822004 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 07:26:12.822085 | orchestrator | 2026-04-17 07:26:12.822100 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-04-17 07:26:12.822122 | orchestrator | Friday 17 April 2026 07:26:11 +0000 (0:00:01.737) 0:00:28.150 ********** 2026-04-17 07:26:12.822141 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:12.822155 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:12.822166 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:12.822188 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:14.571285 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:14.571408 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:14.571467 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:14.572445 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:14.572480 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:14.572500 | orchestrator | 2026-04-17 07:26:14.572523 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-04-17 07:26:14.572545 | orchestrator | Friday 17 April 2026 07:26:13 +0000 (0:00:02.161) 0:00:30.312 ********** 2026-04-17 07:26:14.572591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:14.572612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:14.572639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:14.572659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 
'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:14.572671 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:14.572684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:14.572697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  
2026-04-17 07:26:14.572709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:14.572721 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:14.572732 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:14.572750 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:18.228195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:18.228366 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:18.228387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:18.228399 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:18.228411 | orchestrator | 2026-04-17 07:26:18.228423 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-04-17 07:26:18.228449 | orchestrator | Friday 17 April 2026 07:26:15 +0000 (0:00:01.435) 0:00:31.748 ********** 2026-04-17 07:26:18.228462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:18.228475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:18.228487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:18.228499 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:18.228529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:18.228549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:18.228560 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:18.228577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:18.228588 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:18.228600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 
'timeout': '30'}}})  2026-04-17 07:26:18.228611 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:18.228622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:18.228633 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:18.228644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:18.228656 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:18.228673 | orchestrator | 2026-04-17 07:26:18.228685 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-04-17 07:26:18.228696 | orchestrator | Friday 17 April 2026 07:26:16 +0000 (0:00:01.904) 0:00:33.652 ********** 2026-04-17 
07:26:18.228719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:22.564049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:22.564171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:22.564189 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:22.564204 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:22.564215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:22.564282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:22.564313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:22.564331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:22.564344 | orchestrator | 2026-04-17 07:26:22.564357 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-04-17 07:26:22.564369 | orchestrator | Friday 17 April 2026 07:26:19 +0000 (0:00:02.336) 0:00:35.988 ********** 2026-04-17 07:26:22.564380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:22.564392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:22.564426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:22.564444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:32.752535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:32.752689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:32.752715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:32.752736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 07:26:32.752785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-17 07:26:32.752805 | orchestrator |
2026-04-17 07:26:32.752826 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] *****************
2026-04-17 07:26:32.752846 | orchestrator | Friday 17 April 2026 07:26:24 +0000 (0:00:05.084) 0:00:41.073 **********
2026-04-17 07:26:32.752863 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-17 07:26:32.752882 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 07:26:32.752900 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-17 07:26:32.752917 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-17 07:26:32.752934 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-17 07:26:32.752951 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-17 07:26:32.752968 | orchestrator |
2026-04-17 07:26:32.752985 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************
2026-04-17 07:26:32.753002 | orchestrator | Friday 17 April 2026 07:26:26 +0000 (0:00:01.726) 0:00:42.800 **********
2026-04-17 07:26:32.753020 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:26:32.753038 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:26:32.753058 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:26:32.753079 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:26:32.753097 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:26:32.753140 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:26:32.753161 | orchestrator |
2026-04-17 07:26:32.753180 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] ***
2026-04-17 07:26:32.753201 | orchestrator | Friday 17 April 2026 07:26:26 +0000 (0:00:00.654) 0:00:43.455 **********
2026-04-17 07:26:32.753220 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:26:32.753289 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:26:32.753309 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:26:32.753329 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:26:32.753349 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:26:32.753366 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:26:32.753383 | orchestrator |
2026-04-17 07:26:32.753401 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] ***************************
2026-04-17 07:26:32.753418 | orchestrator | Friday 17 April 2026 07:26:28 +0000 (0:00:01.408) 0:00:44.863 **********
2026-04-17 07:26:32.753435 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:26:32.753451 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:26:32.753467 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:26:32.753485 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:26:32.753502 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:26:32.753518 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:26:32.753535 | orchestrator |
2026-04-17 07:26:32.753551 | orchestrator | TASK [ceilometer : Check custom
pipeline.yaml exists] ************************** 2026-04-17 07:26:32.753568 | orchestrator | Friday 17 April 2026 07:26:29 +0000 (0:00:01.402) 0:00:46.266 ********** 2026-04-17 07:26:32.753585 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 07:26:32.753611 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 07:26:32.753629 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 07:26:32.753646 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 07:26:32.753662 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 07:26:32.753678 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 07:26:32.753694 | orchestrator | 2026-04-17 07:26:32.753728 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-04-17 07:26:32.753744 | orchestrator | Friday 17 April 2026 07:26:31 +0000 (0:00:01.721) 0:00:47.987 ********** 2026-04-17 07:26:32.753763 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:32.753782 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:32.753800 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:32.753820 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:32.753853 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 
'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:34.431365 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:34.431496 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:34.431514 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:34.431526 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:34.431538 | orchestrator | 2026-04-17 07:26:34.431551 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-04-17 07:26:34.431563 | orchestrator | Friday 17 April 2026 07:26:33 +0000 (0:00:02.209) 0:00:50.197 ********** 2026-04-17 07:26:34.431574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:34.431587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:34.431598 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:34.431634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:34.431657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:34.431668 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:34.431679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:34.431690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:34.431701 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:34.431712 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:34.431724 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:34.431742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:38.095573 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:38.095676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:38.095691 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:38.095699 | orchestrator | 2026-04-17 07:26:38.095707 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-04-17 07:26:38.095716 | orchestrator | Friday 17 April 2026 07:26:34 +0000 (0:00:01.122) 0:00:51.320 ********** 2026-04-17 07:26:38.095723 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:38.095730 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:38.095738 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:38.095749 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:38.095761 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:38.095773 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:38.095786 | orchestrator | 2026-04-17 07:26:38.095800 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-04-17 07:26:38.095812 | orchestrator | Friday 17 April 2026 07:26:35 +0000 (0:00:00.668) 0:00:51.988 ********** 2026-04-17 07:26:38.095824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:38.095839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:38.095852 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:26:38.095865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:38.095911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:38.095923 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:26:38.095962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:38.095975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:38.095987 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:26:38.095999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:38.096011 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:26:38.096023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:38.096036 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:26:38.096050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:38.096074 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:26:38.096088 | orchestrator | 2026-04-17 07:26:38.096100 | orchestrator | TASK [service-check-containers : ceilometer | Check containers] **************** 2026-04-17 07:26:38.096112 | orchestrator | Friday 17 April 2026 07:26:36 +0000 (0:00:01.545) 0:00:53.534 ********** 2026-04-17 07:26:38.096137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:40.247117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:40.247220 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-17 07:26:40.247268 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:40.247283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:40.247318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:40.247331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:40.247367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:40.247380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-17 07:26:40.247392 | orchestrator | 2026-04-17 07:26:40.247405 | orchestrator | TASK [service-check-containers : ceilometer | Notify handlers to restart containers] *** 2026-04-17 07:26:40.247417 | orchestrator | Friday 17 April 2026 07:26:39 +0000 (0:00:02.408) 0:00:55.942 ********** 2026-04-17 07:26:40.247429 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 07:26:40.247442 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:26:40.247453 | orchestrator | } 2026-04-17 07:26:40.247464 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 07:26:40.247475 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:26:40.247486 | orchestrator | } 2026-04-17 07:26:40.247497 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 07:26:40.247507 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:26:40.247518 | orchestrator | } 2026-04-17 07:26:40.247529 | orchestrator | changed: [testbed-node-3] => { 2026-04-17 07:26:40.247539 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:26:40.247550 | orchestrator | } 2026-04-17 07:26:40.247560 | orchestrator | changed: [testbed-node-4] => { 2026-04-17 07:26:40.247571 | orchestrator |  
"msg": "Notifying handlers" 2026-04-17 07:26:40.247581 | orchestrator | } 2026-04-17 07:26:40.247599 | orchestrator | changed: [testbed-node-5] => { 2026-04-17 07:26:40.247610 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:26:40.247621 | orchestrator | } 2026-04-17 07:26:40.247633 | orchestrator | 2026-04-17 07:26:40.247647 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:26:40.247659 | orchestrator | Friday 17 April 2026 07:26:39 +0000 (0:00:00.670) 0:00:56.613 ********** 2026-04-17 07:26:40.247673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:26:40.247686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:26:40.247699 | orchestrator | skipping: [testbed-node-0] 
2026-04-17 07:26:40.247719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:27:26.702902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:27:26.703017 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:27:26.703025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-17 07:27:26.703031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:27:26.703055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:27:26.703061 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:27:26.703064 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:27:26.703068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:27:26.703072 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:27:26.703092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-17 07:27:26.703096 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:27:26.703100 | orchestrator | 2026-04-17 07:27:26.703105 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-04-17 07:27:26.703110 | orchestrator | Friday 17 April 2026 07:26:41 +0000 (0:00:01.940) 0:00:58.553 ********** 2026-04-17 07:27:26.703114 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:27:26.703118 | orchestrator | 2026-04-17 07:27:26.703122 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-17 07:27:26.703126 | orchestrator | Friday 17 April 2026 07:26:49 +0000 (0:00:08.117) 
0:01:06.671 **********
2026-04-17 07:27:26.703129 | orchestrator |
2026-04-17 07:27:26.703133 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-17 07:27:26.703137 | orchestrator | Friday 17 April 2026 07:26:50 +0000 (0:00:00.086) 0:01:06.758 **********
2026-04-17 07:27:26.703141 | orchestrator |
2026-04-17 07:27:26.703144 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-17 07:27:26.703148 | orchestrator | Friday 17 April 2026 07:26:50 +0000 (0:00:00.074) 0:01:06.832 **********
2026-04-17 07:27:26.703156 | orchestrator |
2026-04-17 07:27:26.703160 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-17 07:27:26.703163 | orchestrator | Friday 17 April 2026 07:26:50 +0000 (0:00:00.358) 0:01:07.191 **********
2026-04-17 07:27:26.703168 | orchestrator |
2026-04-17 07:27:26.703171 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-17 07:27:26.703175 | orchestrator | Friday 17 April 2026 07:26:50 +0000 (0:00:00.077) 0:01:07.268 **********
2026-04-17 07:27:26.703179 | orchestrator |
2026-04-17 07:27:26.703182 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-17 07:27:26.703186 | orchestrator | Friday 17 April 2026 07:26:50 +0000 (0:00:00.093) 0:01:07.362 **********
2026-04-17 07:27:26.703190 | orchestrator |
2026-04-17 07:27:26.703193 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] *******
2026-04-17 07:27:26.703197 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-17 07:27:26.703203 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-17 07:27:26.703210 | orchestrator | Friday 17 April 2026 07:26:50 +0000 (0:00:00.075) 0:01:07.438 **********
2026-04-17 07:27:26.703214 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:27:26.703218 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:27:26.703221 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:27:26.703225 | orchestrator |
2026-04-17 07:27:26.703229 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************
2026-04-17 07:27:26.703232 | orchestrator | Friday 17 April 2026 07:27:02 +0000 (0:00:11.708) 0:01:19.146 **********
2026-04-17 07:27:26.703236 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:27:26.703240 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:27:26.703243 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:27:26.703247 | orchestrator |
2026-04-17 07:27:26.703251 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************
2026-04-17 07:27:26.703254 | orchestrator | Friday 17 April 2026 07:27:13 +0000 (0:00:11.224) 0:01:30.371 **********
2026-04-17 07:27:26.703258 | orchestrator | changed: [testbed-node-5]
2026-04-17 07:27:26.703262 | orchestrator | changed: [testbed-node-3]
2026-04-17 07:27:26.703266 | orchestrator | changed: [testbed-node-4]
2026-04-17 07:27:26.703269 | orchestrator |
2026-04-17 07:27:26.703273 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:27:26.703278 | orchestrator | testbed-node-0 : ok=26  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-04-17 07:27:26.703355 | orchestrator | testbed-node-1 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-17 07:27:26.703360 | orchestrator | testbed-node-2 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-17 07:27:26.703364 | orchestrator | testbed-node-3 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-17 07:27:26.703368 | orchestrator | testbed-node-4 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-17 07:27:26.703372 | orchestrator | testbed-node-5 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-17 07:27:26.703376 | orchestrator |
2026-04-17 07:27:26.703380 | orchestrator |
2026-04-17 07:27:26.703383 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:27:26.703387 | orchestrator | Friday 17 April 2026 07:27:26 +0000 (0:00:13.027) 0:01:43.398 **********
2026-04-17 07:27:26.703391 | orchestrator | ===============================================================================
2026-04-17 07:27:26.703399 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 13.03s
2026-04-17 07:27:26.703403 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 11.71s
2026-04-17 07:27:26.703406 | orchestrator | ceilometer : Restart ceilometer-central container ---------------------- 11.22s
2026-04-17 07:27:26.703410 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 8.12s
2026-04-17 07:27:26.703414 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.08s
2026-04-17 07:27:26.703444 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 2.65s
2026-04-17 07:27:27.156703 | orchestrator | service-check-containers : ceilometer | Check containers ---------------- 2.41s
2026-04-17 07:27:27.156814 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.34s
2026-04-17 07:27:27.156830 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 2.24s
2026-04-17 07:27:27.156842 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.21s
2026-04-17 07:27:27.156853 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.16s
2026-04-17 07:27:27.156864 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.94s
2026-04-17 07:27:27.156875 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.90s
2026-04-17 07:27:27.156885 | orchestrator | ceilometer : include_tasks ---------------------------------------------- 1.79s
2026-04-17 07:27:27.156896 | orchestrator | ceilometer : include_tasks ---------------------------------------------- 1.74s
2026-04-17 07:27:27.156906 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.73s
2026-04-17 07:27:27.156917 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.73s
2026-04-17 07:27:27.156927 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.72s
2026-04-17 07:27:27.156938 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.72s
2026-04-17 07:27:27.156948 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 1.56s
2026-04-17 07:27:27.469184 | orchestrator | + osism apply -a upgrade aodh
2026-04-17 07:27:28.844638 | orchestrator | 2026-04-17 07:27:28 | INFO  | Prepare task for execution of aodh.
2026-04-17 07:27:28.916875 | orchestrator | 2026-04-17 07:27:28 | INFO  | Task 66984dc6-bc8d-4861-b311-77cdc99a98ec (aodh) was prepared for execution.
2026-04-17 07:27:28.916971 | orchestrator | 2026-04-17 07:27:28 | INFO  | It takes a moment until task 66984dc6-bc8d-4861-b311-77cdc99a98ec (aodh) has been started and output is visible here.
2026-04-17 07:27:43.440949 | orchestrator |
2026-04-17 07:27:43.441068 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 07:27:43.441080 | orchestrator |
2026-04-17 07:27:43.441088 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 07:27:43.441095 | orchestrator | Friday 17 April 2026 07:27:34 +0000 (0:00:01.638) 0:00:01.638 **********
2026-04-17 07:27:43.441100 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:27:43.441106 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:27:43.441113 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:27:43.441119 | orchestrator |
2026-04-17 07:27:43.441127 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 07:27:43.441132 | orchestrator | Friday 17 April 2026 07:27:35 +0000 (0:00:01.757) 0:00:03.396 **********
2026-04-17 07:27:43.441136 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True)
2026-04-17 07:27:43.441141 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True)
2026-04-17 07:27:43.441145 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True)
2026-04-17 07:27:43.441148 | orchestrator |
2026-04-17 07:27:43.441152 | orchestrator | PLAY [Apply role aodh] *********************************************************
2026-04-17 07:27:43.441156 | orchestrator |
2026-04-17 07:27:43.441160 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-04-17 07:27:43.441194 | orchestrator | Friday 17 April 2026 07:27:37 +0000 (0:00:01.545) 0:00:04.941 **********
2026-04-17 07:27:43.441200 | orchestrator | included: /ansible/roles/aodh/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 07:27:43.441208 | orchestrator |
2026-04-17 07:27:43.441213 | orchestrator | TASK [aodh : Ensuring config directories exist] ********************************
2026-04-17 07:27:43.441219 | orchestrator | Friday 17 April 2026 07:27:41 +0000 (0:00:03.828) 0:00:08.770 **********
2026-04-17 07:27:43.441228 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:27:43.441257 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:27:43.441281 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:27:43.441289 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-17 07:27:43.441303 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-17 07:27:43.441362 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-17 07:27:43.441370 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-17 07:27:43.441380 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-17 07:27:43.441386 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-17 07:27:43.441398 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-17 07:27:48.133137 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-17 07:27:48.133279 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-17 07:27:48.133297 | orchestrator |
2026-04-17 07:27:48.133311 | orchestrator | TASK [aodh : Check if policies shall be overwritten] ***************************
2026-04-17 07:27:48.133371 | orchestrator | Friday 17 April 2026 07:27:44 +0000 (0:00:03.475) 0:00:12.246 **********
2026-04-17 07:27:48.133383 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:27:48.133395 | orchestrator |
2026-04-17 07:27:48.133407 | orchestrator | TASK [aodh : Set aodh policy file] *********************************************
2026-04-17 07:27:48.133418 | orchestrator | Friday 17 April 2026 07:27:45 +0000 (0:00:01.124) 0:00:13.370 **********
2026-04-17 07:27:48.133428 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:27:48.133439 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:27:48.133450 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:27:48.133460 | orchestrator |
2026-04-17 07:27:48.133471 | orchestrator | TASK [aodh : Copying over existing policy file] ********************************
2026-04-17 07:27:48.133482 | orchestrator | Friday 17 April 2026 07:27:47 +0000 (0:00:01.341) 0:00:14.711 **********
2026-04-17 07:27:48.133509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:27:48.133526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-17 07:27:48.133539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-17 07:27:48.133578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-17 07:27:48.133592 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:27:48.133604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:27:48.133617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-17 07:27:48.133633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-17 07:27:48.133645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-17 07:27:48.133658 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:27:48.133681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:27:54.101957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-17 07:27:54.102099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-17 07:27:54.102114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-17 07:27:54.102124 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:27:54.102134 | orchestrator |
2026-04-17 07:27:54.102157 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-04-17 07:27:54.102167 | orchestrator | Friday 17 April 2026 07:27:49 +0000 (0:00:02.064) 0:00:16.776 **********
2026-04-17 07:27:54.102176 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 07:27:54.102185 | orchestrator |
2026-04-17 07:27:54.102194 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] ***********
2026-04-17 07:27:54.102202 | orchestrator | Friday 17 April 2026 07:27:51 +0000 (0:00:01.930) 0:00:18.706 **********
2026-04-17 07:27:54.102218 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:27:54.102263 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:27:54.102273 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:27:54.102282 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-17 07:27:54.102295 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-17 07:27:54.102304 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-17 07:27:54.102320 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-17 07:27:54.102405 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-17 07:27:57.258796 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-17 07:27:57.258880 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-17 07:27:57.258891 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-17 07:27:57.258911 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-17 07:27:57.258941 | orchestrator |
2026-04-17 07:27:57.258954 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] ***
2026-04-17 07:27:57.258966 | orchestrator | Friday 17 April 2026 07:27:56 +0000 (0:00:05.213) 0:00:23.919 **********
2026-04-17 07:27:57.258978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:27:57.259009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-17 07:27:57.259023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:27:57.259035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-17 07:27:57.259052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-17 07:27:57.259070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-17 07:27:57.259076 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:27:57.259084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:27:57.259098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:27:59.366325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 07:27:59.366529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 07:27:59.366560 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:27:59.366603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:27:59.366653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 07:27:59.366678 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:27:59.366698 | orchestrator | 2026-04-17 07:27:59.366717 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-17 07:27:59.366734 | orchestrator | Friday 17 April 2026 07:27:58 +0000 (0:00:02.273) 0:00:26.193 ********** 2026-04-17 07:27:59.366746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:27:59.366785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 07:27:59.366834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:27:59.366856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 07:27:59.366881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:27:59.366896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:27:59.366909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:27:59.366931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 07:28:04.391864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 07:28:04.391997 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:28:04.392029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 07:28:04.392042 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:28:04.392054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:28:04.392066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 07:28:04.392077 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:28:04.392089 | orchestrator | 2026-04-17 07:28:04.392100 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-04-17 07:28:04.392113 | orchestrator | Friday 17 April 2026 07:28:00 +0000 (0:00:02.131) 0:00:28.324 ********** 2026-04-17 07:28:04.392125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 
'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:28:04.392159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:28:04.392185 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:28:04.392198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 07:28:04.392210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 07:28:04.392222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 07:28:04.392233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:04.392252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:13.311711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:13.311849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:13.311877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:13.311898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:13.311919 | orchestrator | 2026-04-17 07:28:13.311941 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-17 07:28:13.311961 | orchestrator | Friday 17 April 2026 07:28:06 +0000 (0:00:05.716) 0:00:34.041 ********** 2026-04-17 07:28:13.311982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:28:13.312068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:28:13.312084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:28:13.312097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 07:28:13.312109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 07:28:13.312121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 07:28:13.312140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:13.312164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:22.660785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:22.660889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:22.660905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:22.660916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 
07:28:22.660928 | orchestrator | 2026-04-17 07:28:22.660941 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-17 07:28:22.660953 | orchestrator | Friday 17 April 2026 07:28:16 +0000 (0:00:10.168) 0:00:44.210 ********** 2026-04-17 07:28:22.660991 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:28:22.661003 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:28:22.661014 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:28:22.661025 | orchestrator | 2026-04-17 07:28:22.661036 | orchestrator | TASK [service-check-containers : aodh | Check containers] ********************** 2026-04-17 07:28:22.661047 | orchestrator | Friday 17 April 2026 07:28:19 +0000 (0:00:03.031) 0:00:47.241 ********** 2026-04-17 07:28:22.661059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:28:22.661107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:28:22.661121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:28:22.661133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 07:28:22.661154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 07:28:22.661165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-17 07:28:22.661189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:26.939425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:26.939536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:26.939553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:26.939565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:26.939603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-17 07:28:26.939616 | orchestrator | 2026-04-17 07:28:26.939630 | orchestrator | TASK [service-check-containers : aodh | Notify handlers to restart containers] *** 2026-04-17 07:28:26.939642 | orchestrator | Friday 17 April 2026 07:28:24 +0000 (0:00:05.122) 
0:00:52.363 ********** 2026-04-17 07:28:26.939654 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 07:28:26.939666 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:28:26.939677 | orchestrator | } 2026-04-17 07:28:26.939688 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 07:28:26.939699 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:28:26.939710 | orchestrator | } 2026-04-17 07:28:26.939721 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 07:28:26.939731 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:28:26.939742 | orchestrator | } 2026-04-17 07:28:26.939753 | orchestrator | 2026-04-17 07:28:26.939764 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:28:26.939775 | orchestrator | Friday 17 April 2026 07:28:26 +0000 (0:00:01.637) 0:00:54.001 ********** 2026-04-17 07:28:26.939820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:28:26.939837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 07:28:26.939850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:28:26.939868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 07:28:26.939880 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:28:26.939893 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:28:26.939926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 07:28:26.939958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:29:45.202778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 07:29:45.202899 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:29:45.202920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  
2026-04-17 07:29:45.202962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 07:29:45.202976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 07:29:45.203003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 07:29:45.203015 | orchestrator | 
skipping: [testbed-node-2] 2026-04-17 07:29:45.203026 | orchestrator | 2026-04-17 07:29:45.203038 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-04-17 07:29:45.203051 | orchestrator | Friday 17 April 2026 07:28:28 +0000 (0:00:02.092) 0:00:56.093 ********** 2026-04-17 07:29:45.203061 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:29:45.203072 | orchestrator | 2026-04-17 07:29:45.203083 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-17 07:29:45.203094 | orchestrator | Friday 17 April 2026 07:28:44 +0000 (0:00:16.457) 0:01:12.551 ********** 2026-04-17 07:29:45.203104 | orchestrator | 2026-04-17 07:29:45.203115 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-17 07:29:45.203126 | orchestrator | Friday 17 April 2026 07:28:45 +0000 (0:00:00.465) 0:01:13.016 ********** 2026-04-17 07:29:45.203136 | orchestrator | 2026-04-17 07:29:45.203165 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-17 07:29:45.203176 | orchestrator | Friday 17 April 2026 07:28:45 +0000 (0:00:00.468) 0:01:13.484 ********** 2026-04-17 07:29:45.203187 | orchestrator | 2026-04-17 07:29:45.203206 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-04-17 07:29:45.203217 | orchestrator | Friday 17 April 2026 07:28:46 +0000 (0:00:01.017) 0:01:14.502 ********** 2026-04-17 07:29:45.203228 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:29:45.203239 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:29:45.203250 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:29:45.203261 | orchestrator | 2026-04-17 07:29:45.203272 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-04-17 07:29:45.203283 | orchestrator | Friday 17 April 2026 07:29:00 +0000 (0:00:13.357) 
0:01:27.859 ********** 2026-04-17 07:29:45.203294 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:29:45.203305 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:29:45.203316 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:29:45.203326 | orchestrator | 2026-04-17 07:29:45.203337 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-04-17 07:29:45.203348 | orchestrator | Friday 17 April 2026 07:29:13 +0000 (0:00:13.121) 0:01:40.980 ********** 2026-04-17 07:29:45.203393 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:29:45.203405 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:29:45.203420 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:29:45.203431 | orchestrator | 2026-04-17 07:29:45.203442 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-04-17 07:29:45.203453 | orchestrator | Friday 17 April 2026 07:29:26 +0000 (0:00:12.856) 0:01:53.837 ********** 2026-04-17 07:29:45.203464 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:29:45.203475 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:29:45.203485 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:29:45.203496 | orchestrator | 2026-04-17 07:29:45.203507 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 07:29:45.203519 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 07:29:45.203531 | orchestrator | testbed-node-1 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 07:29:45.203544 | orchestrator | testbed-node-2 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 07:29:45.203563 | orchestrator | 2026-04-17 07:29:45.203581 | orchestrator | 2026-04-17 07:29:45.203598 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-17 07:29:45.203616 | orchestrator | Friday 17 April 2026 07:29:44 +0000 (0:00:18.409) 0:02:12.247 ********** 2026-04-17 07:29:45.203634 | orchestrator | =============================================================================== 2026-04-17 07:29:45.203652 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 18.41s 2026-04-17 07:29:45.203669 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 16.46s 2026-04-17 07:29:45.203687 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 13.36s 2026-04-17 07:29:45.203703 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 13.12s 2026-04-17 07:29:45.203722 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 12.86s 2026-04-17 07:29:45.203740 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------ 10.17s 2026-04-17 07:29:45.203758 | orchestrator | aodh : Copying over config.json files for services ---------------------- 5.72s 2026-04-17 07:29:45.203775 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 5.21s 2026-04-17 07:29:45.203793 | orchestrator | service-check-containers : aodh | Check containers ---------------------- 5.12s 2026-04-17 07:29:45.203812 | orchestrator | aodh : include_tasks ---------------------------------------------------- 3.83s 2026-04-17 07:29:45.203828 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 3.48s 2026-04-17 07:29:45.203839 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 3.03s 2026-04-17 07:29:45.203860 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS certificate --- 2.27s 2026-04-17 07:29:45.203871 | orchestrator | service-cert-copy : aodh | 
Copying over backend internal TLS key -------- 2.13s 2026-04-17 07:29:45.203882 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.09s 2026-04-17 07:29:45.203892 | orchestrator | aodh : Copying over existing policy file -------------------------------- 2.06s 2026-04-17 07:29:45.203910 | orchestrator | aodh : Flush handlers --------------------------------------------------- 1.95s 2026-04-17 07:29:45.203921 | orchestrator | aodh : include_tasks ---------------------------------------------------- 1.93s 2026-04-17 07:29:45.203932 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.76s 2026-04-17 07:29:45.203943 | orchestrator | service-check-containers : aodh | Notify handlers to restart containers --- 1.64s 2026-04-17 07:29:45.451126 | orchestrator | ++ semver 10.0.0 7.0.0 2026-04-17 07:29:45.504892 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-17 07:29:45.505012 | orchestrator | + osism apply -a bootstrap octavia 2026-04-17 07:29:46.978339 | orchestrator | 2026-04-17 07:29:46 | INFO  | Prepare task for execution of octavia. 2026-04-17 07:29:47.051487 | orchestrator | 2026-04-17 07:29:47 | INFO  | Task c2a53bf7-2169-47e4-8078-dbf4b8f9c051 (octavia) was prepared for execution. 2026-04-17 07:29:47.051579 | orchestrator | 2026-04-17 07:29:47 | INFO  | It takes a moment until task c2a53bf7-2169-47e4-8078-dbf4b8f9c051 (octavia) has been started and output is visible here. 
2026-04-17 07:30:36.917277 | orchestrator | 2026-04-17 07:30:36.917436 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 07:30:36.917463 | orchestrator | 2026-04-17 07:30:36.917485 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 07:30:36.917504 | orchestrator | Friday 17 April 2026 07:29:52 +0000 (0:00:01.767) 0:00:01.767 ********** 2026-04-17 07:30:36.917523 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:30:36.917541 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:30:36.917562 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:30:36.917580 | orchestrator | 2026-04-17 07:30:36.917599 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 07:30:36.917620 | orchestrator | Friday 17 April 2026 07:29:54 +0000 (0:00:02.035) 0:00:03.802 ********** 2026-04-17 07:30:36.917638 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-17 07:30:36.917658 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-17 07:30:36.917671 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-17 07:30:36.917682 | orchestrator | 2026-04-17 07:30:36.917692 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-17 07:30:36.917703 | orchestrator | 2026-04-17 07:30:36.917714 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 07:30:36.917726 | orchestrator | Friday 17 April 2026 07:29:58 +0000 (0:00:03.791) 0:00:07.594 ********** 2026-04-17 07:30:36.917738 | orchestrator | included: /ansible/roles/octavia/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:30:36.917749 | orchestrator | 2026-04-17 07:30:36.917760 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 
2026-04-17 07:30:36.917771 | orchestrator | Friday 17 April 2026 07:30:00 +0000 (0:00:02.617) 0:00:10.211 **********
2026-04-17 07:30:36.917781 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:30:36.917792 | orchestrator |
2026-04-17 07:30:36.917802 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-04-17 07:30:36.917813 | orchestrator | Friday 17 April 2026 07:30:04 +0000 (0:00:03.597) 0:00:13.809 **********
2026-04-17 07:30:36.917824 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:30:36.917834 | orchestrator |
2026-04-17 07:30:36.917845 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-04-17 07:30:36.917856 | orchestrator | Friday 17 April 2026 07:30:07 +0000 (0:00:03.307) 0:00:17.117 **********
2026-04-17 07:30:36.917896 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:30:36.917908 | orchestrator |
2026-04-17 07:30:36.917919 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-04-17 07:30:36.917930 | orchestrator | Friday 17 April 2026 07:30:11 +0000 (0:00:03.285) 0:00:20.402 **********
2026-04-17 07:30:36.917941 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:30:36.917951 | orchestrator |
2026-04-17 07:30:36.917962 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-04-17 07:30:36.917973 | orchestrator | Friday 17 April 2026 07:30:14 +0000 (0:00:03.673) 0:00:24.076 **********
2026-04-17 07:30:36.917983 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:30:36.917995 | orchestrator |
2026-04-17 07:30:36.918005 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:30:36.918075 | orchestrator | testbed-node-0 : ok=8  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 07:30:36.918091 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 07:30:36.918104 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 07:30:36.918115 | orchestrator |
2026-04-17 07:30:36.918126 | orchestrator |
2026-04-17 07:30:36.918136 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:30:36.918147 | orchestrator | Friday 17 April 2026 07:30:36 +0000 (0:00:21.673) 0:00:45.749 **********
2026-04-17 07:30:36.918158 | orchestrator | ===============================================================================
2026-04-17 07:30:36.918169 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.67s
2026-04-17 07:30:36.918179 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.79s
2026-04-17 07:30:36.918190 | orchestrator | octavia : Creating Octavia persistence database user and setting permissions --- 3.67s
2026-04-17 07:30:36.918201 | orchestrator | octavia : Creating Octavia database ------------------------------------- 3.60s
2026-04-17 07:30:36.918212 | orchestrator | octavia : Creating Octavia persistence database ------------------------- 3.31s
2026-04-17 07:30:36.918223 | orchestrator | octavia : Creating Octavia database user and setting permissions -------- 3.29s
2026-04-17 07:30:36.918246 | orchestrator | octavia : include_tasks ------------------------------------------------- 2.62s
2026-04-17 07:30:36.918258 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.04s
2026-04-17 07:30:37.159865 | orchestrator | + osism apply -a upgrade octavia
2026-04-17 07:30:38.534567 | orchestrator | 2026-04-17 07:30:38 | INFO  | Prepare task for execution of octavia.
2026-04-17 07:30:38.604024 | orchestrator | 2026-04-17 07:30:38 | INFO  | Task 77c4a9c7-1db2-433a-adef-56c0350d5b5e (octavia) was prepared for execution.
2026-04-17 07:30:38.604107 | orchestrator | 2026-04-17 07:30:38 | INFO  | It takes a moment until task 77c4a9c7-1db2-433a-adef-56c0350d5b5e (octavia) has been started and output is visible here. 2026-04-17 07:31:18.974727 | orchestrator | 2026-04-17 07:31:18.974842 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 07:31:18.974858 | orchestrator | 2026-04-17 07:31:18.974870 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 07:31:18.974882 | orchestrator | Friday 17 April 2026 07:30:43 +0000 (0:00:01.684) 0:00:01.684 ********** 2026-04-17 07:31:18.974894 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:31:18.974906 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:31:18.974917 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:31:18.974929 | orchestrator | 2026-04-17 07:31:18.974940 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 07:31:18.974952 | orchestrator | Friday 17 April 2026 07:30:45 +0000 (0:00:01.706) 0:00:03.391 ********** 2026-04-17 07:31:18.974963 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-17 07:31:18.975001 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-17 07:31:18.975014 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-17 07:31:18.975025 | orchestrator | 2026-04-17 07:31:18.975036 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-17 07:31:18.975048 | orchestrator | 2026-04-17 07:31:18.975059 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 07:31:18.975070 | orchestrator | Friday 17 April 2026 07:30:48 +0000 (0:00:02.806) 0:00:06.198 ********** 2026-04-17 07:31:18.975083 | orchestrator | included: /ansible/roles/octavia/tasks/upgrade.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-17 07:31:18.975095 | orchestrator | 2026-04-17 07:31:18.975106 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 07:31:18.975118 | orchestrator | Friday 17 April 2026 07:30:50 +0000 (0:00:02.375) 0:00:08.573 ********** 2026-04-17 07:31:18.975129 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:31:18.975141 | orchestrator | 2026-04-17 07:31:18.975152 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-17 07:31:18.975163 | orchestrator | Friday 17 April 2026 07:30:53 +0000 (0:00:02.662) 0:00:11.236 ********** 2026-04-17 07:31:18.975174 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:31:18.975185 | orchestrator | 2026-04-17 07:31:18.975197 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-17 07:31:18.975208 | orchestrator | Friday 17 April 2026 07:30:58 +0000 (0:00:05.339) 0:00:16.576 ********** 2026-04-17 07:31:18.975219 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:31:18.975230 | orchestrator | 2026-04-17 07:31:18.975241 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-17 07:31:18.975252 | orchestrator | Friday 17 April 2026 07:31:03 +0000 (0:00:04.657) 0:00:21.233 ********** 2026-04-17 07:31:18.975264 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-17 07:31:18.975276 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-17 07:31:18.975288 | orchestrator | 2026-04-17 07:31:18.975299 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-17 07:31:18.975335 | orchestrator | Friday 17 April 2026 07:31:11 +0000 (0:00:08.116) 0:00:29.349 ********** 2026-04-17 07:31:18.975347 | orchestrator | ok: 
[testbed-node-0] 2026-04-17 07:31:18.975358 | orchestrator | 2026-04-17 07:31:18.975369 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-17 07:31:18.975380 | orchestrator | Friday 17 April 2026 07:31:15 +0000 (0:00:04.440) 0:00:33.790 ********** 2026-04-17 07:31:18.975391 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:31:18.975402 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:31:18.975412 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:31:18.975423 | orchestrator | 2026-04-17 07:31:18.975434 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-17 07:31:18.975445 | orchestrator | Friday 17 April 2026 07:31:17 +0000 (0:00:01.333) 0:00:35.123 ********** 2026-04-17 07:31:18.975473 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:31:18.975518 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:31:18.975531 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:31:18.975544 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:31:18.975558 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:31:18.975569 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:31:18.975586 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:18.975615 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:23.818869 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:23.818977 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:23.818994 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:23.819007 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:31:23.819019 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:23.819068 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:31:23.819099 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:31:23.819113 | orchestrator | 2026-04-17 07:31:23.819126 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-17 07:31:23.819138 | orchestrator | Friday 17 April 2026 07:31:20 +0000 (0:00:03.860) 0:00:38.984 ********** 2026-04-17 07:31:23.819150 | orchestrator | 
skipping: [testbed-node-0] 2026-04-17 07:31:23.819162 | orchestrator | 2026-04-17 07:31:23.819173 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-17 07:31:23.819184 | orchestrator | Friday 17 April 2026 07:31:22 +0000 (0:00:01.132) 0:00:40.117 ********** 2026-04-17 07:31:23.819195 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:31:23.819206 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:31:23.819216 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:31:23.819227 | orchestrator | 2026-04-17 07:31:23.819238 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-17 07:31:23.819249 | orchestrator | Friday 17 April 2026 07:31:23 +0000 (0:00:01.370) 0:00:41.487 ********** 2026-04-17 07:31:23.819260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:31:23.819277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:31:23.819376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:31:23.819398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:31:23.819420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:31:28.456704 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:31:28.456801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:31:28.456823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:31:28.456835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:31:28.456870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:31:28.456904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:31:28.456915 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:31:28.456941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:31:28.456953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:31:28.456963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:31:28.456973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:31:28.456991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:31:28.457006 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:31:28.457016 | orchestrator | 
2026-04-17 07:31:28.457027 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 07:31:28.457038 | orchestrator | Friday 17 April 2026 07:31:25 +0000 (0:00:01.769) 0:00:43.257 ********** 2026-04-17 07:31:28.457049 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:31:28.457058 | orchestrator | 2026-04-17 07:31:28.457068 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-17 07:31:28.457078 | orchestrator | Friday 17 April 2026 07:31:26 +0000 (0:00:01.755) 0:00:45.012 ********** 2026-04-17 07:31:28.457096 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:31:31.923761 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:31:31.923901 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:31:31.923991 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:31:31.924036 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:31:31.924056 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:31:31.924088 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:31.924102 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:31.924114 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:31.924134 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:31.924152 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:31.924163 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:31.924174 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:31:31.924194 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:31:33.786452 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:31:33.786586 | orchestrator | 2026-04-17 07:31:33.786603 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-17 07:31:33.786617 | orchestrator | Friday 17 April 2026 07:31:33 +0000 (0:00:06.244) 0:00:51.257 ********** 2026-04-17 07:31:33.786630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:31:33.786659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:31:33.786673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:31:33.786685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:31:33.786715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:31:33.786735 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:31:33.786749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:31:33.786761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:31:33.786778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:31:33.786789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:31:33.786801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:31:33.786812 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:31:33.786832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:31:35.440224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:31:35.440377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:31:35.440415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:31:35.440429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:31:35.440443 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:31:35.440457 | orchestrator | 2026-04-17 07:31:35.440470 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-17 07:31:35.440482 | orchestrator | Friday 17 April 2026 07:31:34 +0000 (0:00:01.720) 0:00:52.977 ********** 2026-04-17 07:31:35.440494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:31:35.440554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:31:35.440568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:31:35.440580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:31:35.440597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:31:35.440608 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:31:35.440620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:31:35.440640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:31:35.440659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:31:39.130582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:31:39.130689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:31:39.130706 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:31:39.130738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:31:39.130755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:31:39.130789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:31:39.130818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:31:39.130830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:31:39.130842 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:31:39.130853 | orchestrator | 2026-04-17 07:31:39.130866 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-17 07:31:39.130878 | orchestrator | Friday 17 April 2026 07:31:36 +0000 (0:00:01.760) 0:00:54.738 ********** 2026-04-17 07:31:39.130895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:31:39.130909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:31:39.130965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:31:39.130988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:31:49.810098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:31:49.810356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:31:49.810392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:49.810407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:49.810443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:49.810456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:49.810489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:49.810501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:31:49.810518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:31:49.810530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:31:49.810556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:31:49.810576 | orchestrator | 2026-04-17 07:31:49.810603 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-17 07:31:49.810637 | orchestrator | Friday 17 April 2026 07:31:43 +0000 
(0:00:06.607) 0:01:01.346 **********
2026-04-17 07:31:49.810654 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-17 07:31:49.810673 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-17 07:31:49.810689 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-17 07:31:49.810706 | orchestrator |
2026-04-17 07:31:49.810725 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-04-17 07:31:49.810742 | orchestrator | Friday 17 April 2026 07:31:45 +0000 (0:00:02.706) 0:01:04.052 **********
2026-04-17 07:31:49.810774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 07:32:03.935231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes':
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:32:03.935392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:32:03.935412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:32:03.935427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:32:03.935438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:32:03.935467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:03.935481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:03.935507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:03.935519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:03.935530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:03.935541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:03.935553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:32:03.935572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:32:29.490157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:32:29.490340 | orchestrator | 2026-04-17 07:32:29.490361 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-17 07:32:29.490375 | orchestrator | Friday 17 April 2026 07:32:05 +0000 (0:00:19.130) 0:01:23.183 ********** 2026-04-17 07:32:29.490386 | orchestrator | ok: 
[testbed-node-0]
2026-04-17 07:32:29.490398 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:32:29.490409 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:32:29.490419 | orchestrator |
2026-04-17 07:32:29.490430 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-04-17 07:32:29.490441 | orchestrator | Friday 17 April 2026 07:32:07 +0000 (0:00:02.713) 0:01:25.897 **********
2026-04-17 07:32:29.490453 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-17 07:32:29.490464 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-17 07:32:29.490474 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-17 07:32:29.490485 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-17 07:32:29.490496 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-17 07:32:29.490506 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-17 07:32:29.490517 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-17 07:32:29.490527 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-17 07:32:29.490538 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-17 07:32:29.490549 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-17 07:32:29.490559 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-17 07:32:29.490571 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-17 07:32:29.490582 | orchestrator |
2026-04-17 07:32:29.490593 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-04-17 07:32:29.490604 | orchestrator | Friday 17 April 2026 07:32:13 +0000 (0:00:06.091) 0:01:31.988 **********
2026-04-17 07:32:29.490614 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-17 07:32:29.490625 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-17 07:32:29.490636 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-17 07:32:29.490646 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-17 07:32:29.490657 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-17 07:32:29.490668 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-17 07:32:29.490678 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-17 07:32:29.490688 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-17 07:32:29.490699 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-17 07:32:29.490710 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-17 07:32:29.490720 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-17 07:32:29.490731 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-17 07:32:29.490741 | orchestrator |
2026-04-17 07:32:29.490752 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-04-17 07:32:29.490762 | orchestrator | Friday 17 April 2026 07:32:20 +0000 (0:00:06.179) 0:01:38.168 **********
2026-04-17 07:32:29.490773 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-17 07:32:29.490784 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-17 07:32:29.490802 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-17 07:32:29.490813 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-17 07:32:29.490824 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-17 07:32:29.490834 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-17 07:32:29.490845 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-17 07:32:29.490855 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-17 07:32:29.490866 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-17 07:32:29.490876 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-17 07:32:29.490887 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-17 07:32:29.490897 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-17 07:32:29.490908 | orchestrator |
2026-04-17 07:32:29.490918 | orchestrator | TASK [service-check-containers : octavia | Check containers] *******************
2026-04-17 07:32:29.490929 | orchestrator | Friday 17 April 2026 07:32:26 +0000 (0:00:06.806) 0:01:44.974 **********
2026-04-17 07:32:29.490967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 07:32:29.490985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes':
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:32:29.490998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 07:32:29.491018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:32:29.491030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:32:29.491049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 07:32:35.096627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:35.096738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:35.096754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:35.096767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:35.096801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:35.096813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 07:32:35.096852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:32:35.096865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:32:35.096877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 07:32:35.096888 | orchestrator | 2026-04-17 07:32:35.096899 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-04-17 07:32:35.096910 | orchestrator | Friday 17 April 2026 07:32:33 +0000 (0:00:06.376) 0:01:51.351 ********** 2026-04-17 07:32:35.096921 | orchestrator | 
changed: [testbed-node-0] => { 2026-04-17 07:32:35.096940 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:32:35.096950 | orchestrator | } 2026-04-17 07:32:35.096960 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 07:32:35.096969 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:32:35.096979 | orchestrator | } 2026-04-17 07:32:35.096988 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 07:32:35.096998 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:32:35.097007 | orchestrator | } 2026-04-17 07:32:35.097017 | orchestrator | 2026-04-17 07:32:35.097027 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:32:35.097037 | orchestrator | Friday 17 April 2026 07:32:34 +0000 (0:00:01.430) 0:01:52.782 ********** 2026-04-17 07:32:35.097047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:32:35.097061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:32:35.097086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:32:35.356005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:32:35.356103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:32:35.356143 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:32:35.356158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:32:35.356174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:32:35.356187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:32:35.356231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:32:35.356244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:32:35.356255 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:32:35.356325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 07:32:35.356346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 07:32:35.356357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 07:32:35.356369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 07:32:35.356392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 07:34:15.971807 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:34:15.971934 | orchestrator | 
2026-04-17 07:34:15.971960 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-17 07:34:15.971983 | orchestrator | Friday 17 April 2026 07:32:37 +0000 (0:00:02.316) 0:01:55.098 ********** 2026-04-17 07:34:15.972002 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:34:15.972021 | orchestrator | 2026-04-17 07:34:15.972040 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-17 07:34:15.972060 | orchestrator | Friday 17 April 2026 07:32:49 +0000 (0:00:12.918) 0:02:08.017 ********** 2026-04-17 07:34:15.972111 | orchestrator | 2026-04-17 07:34:15.972129 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-17 07:34:15.972140 | orchestrator | Friday 17 April 2026 07:32:50 +0000 (0:00:00.482) 0:02:08.500 ********** 2026-04-17 07:34:15.972150 | orchestrator | 2026-04-17 07:34:15.972161 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-17 07:34:15.972171 | orchestrator | Friday 17 April 2026 07:32:50 +0000 (0:00:00.458) 0:02:08.958 ********** 2026-04-17 07:34:15.972182 | orchestrator | 2026-04-17 07:34:15.972192 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-17 07:34:15.972203 | orchestrator | Friday 17 April 2026 07:32:51 +0000 (0:00:00.817) 0:02:09.775 ********** 2026-04-17 07:34:15.972213 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:34:15.972224 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:34:15.972235 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:34:15.972245 | orchestrator | 2026-04-17 07:34:15.972311 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-17 07:34:15.972323 | orchestrator | Friday 17 April 2026 07:33:10 +0000 (0:00:19.049) 0:02:28.825 ********** 2026-04-17 07:34:15.972334 | orchestrator | 
changed: [testbed-node-0] 2026-04-17 07:34:15.972347 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:34:15.972359 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:34:15.972372 | orchestrator | 2026-04-17 07:34:15.972384 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-17 07:34:15.972397 | orchestrator | Friday 17 April 2026 07:33:25 +0000 (0:00:14.393) 0:02:43.218 ********** 2026-04-17 07:34:15.972409 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:34:15.972422 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:34:15.972434 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:34:15.972447 | orchestrator | 2026-04-17 07:34:15.972459 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-17 07:34:15.972471 | orchestrator | Friday 17 April 2026 07:33:38 +0000 (0:00:13.437) 0:02:56.656 ********** 2026-04-17 07:34:15.972484 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:34:15.972496 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:34:15.972509 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:34:15.972519 | orchestrator | 2026-04-17 07:34:15.972530 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-17 07:34:15.972541 | orchestrator | Friday 17 April 2026 07:33:51 +0000 (0:00:12.839) 0:03:09.496 ********** 2026-04-17 07:34:15.972551 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:34:15.972562 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:34:15.972572 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:34:15.972583 | orchestrator | 2026-04-17 07:34:15.972593 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 07:34:15.972605 | orchestrator | testbed-node-0 : ok=27  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 07:34:15.972618 | orchestrator 
| testbed-node-1 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 07:34:15.972629 | orchestrator | testbed-node-2 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 07:34:15.972639 | orchestrator | 2026-04-17 07:34:15.972650 | orchestrator | 2026-04-17 07:34:15.972661 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 07:34:15.972671 | orchestrator | Friday 17 April 2026 07:34:15 +0000 (0:00:24.075) 0:03:33.571 ********** 2026-04-17 07:34:15.972682 | orchestrator | =============================================================================== 2026-04-17 07:34:15.972692 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 24.07s 2026-04-17 07:34:15.972703 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 19.13s 2026-04-17 07:34:15.972726 | orchestrator | octavia : Restart octavia-api container -------------------------------- 19.05s 2026-04-17 07:34:15.972737 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 14.39s 2026-04-17 07:34:15.972747 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 13.44s 2026-04-17 07:34:15.972758 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 12.92s 2026-04-17 07:34:15.972768 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 12.84s 2026-04-17 07:34:15.972779 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.12s 2026-04-17 07:34:15.972789 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.81s 2026-04-17 07:34:15.972800 | orchestrator | octavia : Copying over config.json files for services ------------------- 6.61s 2026-04-17 07:34:15.972810 | orchestrator | service-check-containers : octavia | 
Check containers ------------------- 6.37s 2026-04-17 07:34:15.972834 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 6.25s 2026-04-17 07:34:15.972845 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.18s 2026-04-17 07:34:15.972856 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 6.09s 2026-04-17 07:34:15.972884 | orchestrator | octavia : Get amphora flavor info --------------------------------------- 5.34s 2026-04-17 07:34:15.972896 | orchestrator | octavia : Get service project id ---------------------------------------- 4.66s 2026-04-17 07:34:15.972907 | orchestrator | octavia : Get loadbalancer management network --------------------------- 4.44s 2026-04-17 07:34:15.972917 | orchestrator | octavia : Ensuring config directories exist ----------------------------- 3.86s 2026-04-17 07:34:15.972928 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.81s 2026-04-17 07:34:15.972939 | orchestrator | octavia : Copying over Octavia SSH key ---------------------------------- 2.71s 2026-04-17 07:34:16.177916 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-17 07:34:16.178082 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/310-openstack-extended.sh 2026-04-17 07:34:17.563911 | orchestrator | 2026-04-17 07:34:17 | INFO  | Prepare task for execution of gnocchi. 2026-04-17 07:34:17.634070 | orchestrator | 2026-04-17 07:34:17 | INFO  | Task f2095bca-bc10-49d0-b556-8beec131544b (gnocchi) was prepared for execution. 2026-04-17 07:34:17.634158 | orchestrator | 2026-04-17 07:34:17 | INFO  | It takes a moment until task f2095bca-bc10-49d0-b556-8beec131544b (gnocchi) has been started and output is visible here. 
2026-04-17 07:34:29.945017 | orchestrator | 2026-04-17 07:34:29.945130 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 07:34:29.945147 | orchestrator | 2026-04-17 07:34:29.945159 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 07:34:29.945171 | orchestrator | Friday 17 April 2026 07:34:22 +0000 (0:00:01.751) 0:00:01.751 ********** 2026-04-17 07:34:29.945182 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:34:29.945215 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:34:29.945237 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:34:29.945249 | orchestrator | 2026-04-17 07:34:29.945314 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 07:34:29.945327 | orchestrator | Friday 17 April 2026 07:34:24 +0000 (0:00:01.872) 0:00:03.624 ********** 2026-04-17 07:34:29.945338 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-04-17 07:34:29.945350 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-04-17 07:34:29.945361 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-04-17 07:34:29.945373 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-04-17 07:34:29.945385 | orchestrator | 2026-04-17 07:34:29.945396 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-04-17 07:34:29.945406 | orchestrator | skipping: no hosts matched 2026-04-17 07:34:29.945418 | orchestrator | 2026-04-17 07:34:29.945456 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 07:34:29.945469 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 07:34:29.945481 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2026-04-17 07:34:29.945492 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 07:34:29.945502 | orchestrator | 2026-04-17 07:34:29.945513 | orchestrator | 2026-04-17 07:34:29.945524 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 07:34:29.945535 | orchestrator | Friday 17 April 2026 07:34:29 +0000 (0:00:04.948) 0:00:08.573 ********** 2026-04-17 07:34:29.945545 | orchestrator | =============================================================================== 2026-04-17 07:34:29.945556 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.95s 2026-04-17 07:34:29.945569 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.87s 2026-04-17 07:34:31.480869 | orchestrator | 2026-04-17 07:34:31 | INFO  | Prepare task for execution of manila. 2026-04-17 07:34:31.545402 | orchestrator | 2026-04-17 07:34:31 | INFO  | Task 15aa1ad1-4e07-4da7-b623-5c141cf35505 (manila) was prepared for execution. 2026-04-17 07:34:31.545499 | orchestrator | 2026-04-17 07:34:31 | INFO  | It takes a moment until task 15aa1ad1-4e07-4da7-b623-5c141cf35505 (manila) has been started and output is visible here. 
2026-04-17 07:34:45.862469 | orchestrator | 2026-04-17 07:34:45.862579 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 07:34:45.862595 | orchestrator | 2026-04-17 07:34:45.862607 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 07:34:45.862618 | orchestrator | Friday 17 April 2026 07:34:36 +0000 (0:00:01.622) 0:00:01.622 ********** 2026-04-17 07:34:45.862629 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:34:45.862641 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:34:45.862652 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:34:45.862662 | orchestrator | 2026-04-17 07:34:45.862674 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 07:34:45.862684 | orchestrator | Friday 17 April 2026 07:34:38 +0000 (0:00:01.893) 0:00:03.516 ********** 2026-04-17 07:34:45.862695 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-04-17 07:34:45.862706 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-04-17 07:34:45.862735 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-04-17 07:34:45.862747 | orchestrator | 2026-04-17 07:34:45.862757 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-04-17 07:34:45.862768 | orchestrator | 2026-04-17 07:34:45.862779 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-17 07:34:45.862790 | orchestrator | Friday 17 April 2026 07:34:40 +0000 (0:00:02.253) 0:00:05.769 ********** 2026-04-17 07:34:45.862801 | orchestrator | included: /ansible/roles/manila/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:34:45.862812 | orchestrator | 2026-04-17 07:34:45.862823 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-04-17 
07:34:45.862834 | orchestrator | Friday 17 April 2026 07:34:43 +0000 (0:00:03.291) 0:00:09.061 ********** 2026-04-17 07:34:45.862848 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:34:45.862891 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:34:45.862903 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:34:45.862934 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:34:45.862954 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:34:45.862966 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:34:45.862985 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 07:34:45.862998 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 07:34:45.863011 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 07:34:45.863034 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:03.725503 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:03.725621 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:03.725658 | orchestrator | 2026-04-17 07:35:03.725672 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-17 07:35:03.725685 | orchestrator | Friday 17 April 2026 07:34:47 +0000 (0:00:03.180) 0:00:12.241 ********** 2026-04-17 07:35:03.725696 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:35:03.725708 | orchestrator | 2026-04-17 07:35:03.725719 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-04-17 07:35:03.725729 | orchestrator | Friday 17 April 2026 07:34:48 +0000 (0:00:01.865) 0:00:14.107 ********** 2026-04-17 
07:35:03.725740 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:35:03.725752 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:35:03.725763 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:35:03.725774 | orchestrator | 2026-04-17 07:35:03.725785 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-04-17 07:35:03.725796 | orchestrator | Friday 17 April 2026 07:34:51 +0000 (0:00:02.114) 0:00:16.221 ********** 2026-04-17 07:35:03.725808 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-17 07:35:03.725820 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-17 07:35:03.725831 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-17 07:35:03.725842 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-17 07:35:03.725853 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-17 07:35:03.725864 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-17 07:35:03.725875 | orchestrator | 2026-04-17 07:35:03.725886 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-04-17 07:35:03.725896 | orchestrator | Friday 
17 April 2026 07:34:53 +0000 (0:00:02.515) 0:00:18.737 ********** 2026-04-17 07:35:03.725907 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-17 07:35:03.725918 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-17 07:35:03.725930 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-17 07:35:03.725941 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-17 07:35:03.725969 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-17 07:35:03.725987 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-17 07:35:03.726008 | orchestrator | 2026-04-17 07:35:03.726087 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-04-17 07:35:03.726101 | orchestrator | Friday 17 April 2026 07:34:55 +0000 (0:00:02.340) 0:00:21.077 ********** 2026-04-17 07:35:03.726114 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-04-17 07:35:03.726160 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-04-17 07:35:03.726175 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-04-17 07:35:03.726188 | orchestrator | 2026-04-17 07:35:03.726199 | 
orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-04-17 07:35:03.726210 | orchestrator | Friday 17 April 2026 07:34:57 +0000 (0:00:01.923) 0:00:23.000 ********** 2026-04-17 07:35:03.726221 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:35:03.726233 | orchestrator | 2026-04-17 07:35:03.726244 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-04-17 07:35:03.726255 | orchestrator | Friday 17 April 2026 07:34:58 +0000 (0:00:01.116) 0:00:24.117 ********** 2026-04-17 07:35:03.726301 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:35:03.726313 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:35:03.726324 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:35:03.726335 | orchestrator | 2026-04-17 07:35:03.726346 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-17 07:35:03.726357 | orchestrator | Friday 17 April 2026 07:35:00 +0000 (0:00:01.523) 0:00:25.640 ********** 2026-04-17 07:35:03.726368 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:35:03.726379 | orchestrator | 2026-04-17 07:35:03.726389 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-04-17 07:35:03.726400 | orchestrator | Friday 17 April 2026 07:35:02 +0000 (0:00:01.822) 0:00:27.463 ********** 2026-04-17 07:35:03.726413 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:03.726427 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:03.726456 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:07.845172 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:07.845325 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:07.845343 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:07.845357 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:07.845370 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:07.845404 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:07.845448 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:07.845461 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:07.845473 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:07.845484 | orchestrator | 2026-04-17 07:35:07.845497 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-17 07:35:07.845509 | orchestrator | Friday 17 April 2026 07:35:07 +0000 (0:00:04.997) 0:00:32.461 ********** 2026-04-17 07:35:07.845522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:35:07.845542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:07.845567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:35:10.047163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:10.047259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:10.047332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 
07:35:10.047347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 07:35:10.047383 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:35:10.047397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:10.047441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:10.047455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 07:35:10.047467 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:35:10.047479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:10.047490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 07:35:10.047508 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:35:10.047520 | orchestrator | 2026-04-17 07:35:10.047532 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-17 07:35:10.047543 | orchestrator | Friday 17 April 2026 07:35:09 +0000 (0:00:02.198) 0:00:34.659 ********** 2026-04-17 07:35:10.047555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:35:10.047572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:10.047592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:35:13.370417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 
07:35:13.370521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:13.370562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:35:13.370589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 07:35:13.370603 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:35:13.370617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:13.370646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:13.370660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 
'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:13.370679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 07:35:13.370690 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:35:13.370702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 07:35:13.370713 | orchestrator | 
skipping: [testbed-node-2] 2026-04-17 07:35:13.370724 | orchestrator | 2026-04-17 07:35:13.370736 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-17 07:35:13.370748 | orchestrator | Friday 17 April 2026 07:35:11 +0000 (0:00:02.395) 0:00:37.055 ********** 2026-04-17 07:35:13.370765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:13.370785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:19.830362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:19.830501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 
07:35:19.830550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:19.830578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:19.830592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 
5672'], 'timeout': '30'}}}) 2026-04-17 07:35:19.830623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:19.830637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:19.830667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:19.830686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:19.830714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:19.830735 | orchestrator | 2026-04-17 07:35:19.830756 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-17 07:35:19.830775 | orchestrator | Friday 17 April 2026 07:35:17 +0000 (0:00:05.358) 0:00:42.413 ********** 2026-04-17 07:35:19.830795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:19.830831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:30.810789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:30.810897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:30.810931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:30.810945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:30.810957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:30.811006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:30.811018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:30.811030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:30.811047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:30.811058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:30.811070 | orchestrator | 2026-04-17 07:35:30.811083 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-04-17 07:35:30.811095 | orchestrator | Friday 17 April 2026 07:35:25 +0000 (0:00:08.015) 0:00:50.429 ********** 2026-04-17 07:35:30.811106 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-04-17 07:35:30.811125 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-04-17 07:35:30.811136 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-04-17 07:35:30.811147 | orchestrator | 2026-04-17 07:35:30.811158 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-04-17 07:35:30.811169 | orchestrator | Friday 17 April 2026 07:35:30 +0000 (0:00:04.888) 0:00:55.317 ********** 2026-04-17 07:35:30.811187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': 
True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:35:33.943389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:33.943493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:33.943527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 07:35:33.943541 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:35:33.943555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:35:33.943591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': 
{'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:33.943621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:33.943633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 07:35:33.943645 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 07:35:33.943656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:35:33.943673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:33.943692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:33.943703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 07:35:33.943715 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:35:33.943726 | orchestrator | 2026-04-17 07:35:33.943738 | orchestrator | TASK [service-check-containers : manila | Check containers] ******************** 2026-04-17 07:35:33.943750 | orchestrator | Friday 17 April 2026 07:35:32 +0000 (0:00:02.405) 0:00:57.723 ********** 2026-04-17 07:35:33.943769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:37.816440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:37.816577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:35:37.816616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:37.816632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:37.816644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:37.816679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:37.816698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:37.816711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:37.816730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:37.816742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:37.816754 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-17 07:35:37.816766 | orchestrator | 2026-04-17 07:35:37.816779 | orchestrator | TASK [service-check-containers : manila | Notify handlers to restart containers] *** 2026-04-17 07:35:37.816792 | orchestrator | Friday 17 April 2026 07:35:37 +0000 (0:00:04.946) 0:01:02.670 ********** 2026-04-17 07:35:37.816804 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 07:35:37.816817 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:35:37.816828 | orchestrator | } 2026-04-17 07:35:37.816839 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 07:35:37.816850 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:35:37.816861 | orchestrator | } 2026-04-17 07:35:37.816872 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 07:35:37.816890 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:35:39.784806 | orchestrator | } 2026-04-17 07:35:39.784905 | orchestrator | 2026-04-17 07:35:39.784922 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:35:39.784935 | orchestrator | Friday 17 April 2026 07:35:38 +0000 (0:00:01.452) 0:01:04.123 ********** 2026-04-17 07:35:39.784967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:35:39.785004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:39.785019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:39.785031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 07:35:39.785043 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:35:39.785075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:35:39.785087 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:39.785112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:35:39.785123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': 
'30'}}})  2026-04-17 07:35:39.785135 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:35:39.785146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:35:39.785157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 07:35:39.785176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 07:39:12.651795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 07:39:12.651950 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:39:12.651971 | orchestrator | 2026-04-17 07:39:12.651984 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-04-17 07:39:12.651997 | orchestrator | Friday 17 April 2026 07:35:41 +0000 (0:00:02.475) 0:01:06.598 ********** 2026-04-17 07:39:12.652007 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:39:12.652018 | orchestrator | 2026-04-17 07:39:12.652050 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-17 07:39:12.652061 | orchestrator | Friday 17 April 2026 07:36:00 +0000 (0:00:19.424) 0:01:26.023 ********** 2026-04-17 07:39:12.652072 | orchestrator | 2026-04-17 07:39:12.652083 | orchestrator 
| TASK [manila : Flush handlers] ************************************************* 2026-04-17 07:39:12.652093 | orchestrator | Friday 17 April 2026 07:36:01 +0000 (0:00:00.423) 0:01:26.447 ********** 2026-04-17 07:39:12.652104 | orchestrator | 2026-04-17 07:39:12.652114 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-17 07:39:12.652125 | orchestrator | Friday 17 April 2026 07:36:01 +0000 (0:00:00.415) 0:01:26.862 ********** 2026-04-17 07:39:12.652135 | orchestrator | 2026-04-17 07:39:12.652145 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-04-17 07:39:12.652156 | orchestrator | Friday 17 April 2026 07:36:02 +0000 (0:00:00.843) 0:01:27.705 ********** 2026-04-17 07:39:12.652166 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:39:12.652177 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:39:12.652188 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:39:12.652198 | orchestrator | 2026-04-17 07:39:12.652209 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-04-17 07:39:12.652219 | orchestrator | Friday 17 April 2026 07:36:19 +0000 (0:00:16.970) 0:01:44.676 ********** 2026-04-17 07:39:12.652230 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:39:12.652241 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:39:12.652287 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:39:12.652300 | orchestrator | 2026-04-17 07:39:12.652313 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-04-17 07:39:12.652325 | orchestrator | Friday 17 April 2026 07:36:37 +0000 (0:00:18.401) 0:02:03.078 ********** 2026-04-17 07:39:12.652337 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:39:12.652350 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:39:12.652362 | orchestrator | changed: [testbed-node-2] 2026-04-17 
07:39:12.652374 | orchestrator |
2026-04-17 07:39:12.652386 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-04-17 07:39:12.652398 | orchestrator | Friday 17 April 2026 07:36:50 +0000 (0:00:12.654) 0:02:15.732 **********
2026-04-17 07:39:12.652410 | orchestrator |
2026-04-17 07:39:12.652423 | orchestrator | STILL ALIVE [task 'manila : Restart manila-share container' is running] ********
2026-04-17 07:39:12.652436 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:39:12.652449 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:39:12.652461 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:39:12.652473 | orchestrator |
2026-04-17 07:39:12.652486 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:39:12.652500 | orchestrator | testbed-node-0 : ok=21  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-17 07:39:12.652515 | orchestrator | testbed-node-1 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 07:39:12.652557 | orchestrator | testbed-node-2 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 07:39:12.652569 | orchestrator |
2026-04-17 07:39:12.652581 | orchestrator |
2026-04-17 07:39:12.652594 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:39:12.652608 | orchestrator | Friday 17 April 2026 07:39:12 +0000 (0:02:21.601) 0:04:37.334 **********
2026-04-17 07:39:12.652620 | orchestrator | ===============================================================================
2026-04-17 07:39:12.652632 | orchestrator | manila : Restart manila-share container ------------------------------- 141.60s
2026-04-17 07:39:12.652645 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 19.42s
2026-04-17 07:39:12.652656 | orchestrator | manila : Restart manila-data container --------------------------------- 18.40s
2026-04-17 07:39:12.652667 | orchestrator | manila : Restart manila-api container ---------------------------------- 16.97s
2026-04-17 07:39:12.652677 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 12.65s
2026-04-17 07:39:12.652687 | orchestrator | manila : Copying over manila.conf --------------------------------------- 8.02s
2026-04-17 07:39:12.652698 | orchestrator | manila : Copying over config.json files for services -------------------- 5.36s
2026-04-17 07:39:12.652708 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 5.00s
2026-04-17 07:39:12.652719 | orchestrator | service-check-containers : manila | Check containers -------------------- 4.95s
2026-04-17 07:39:12.652749 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 4.89s
2026-04-17 07:39:12.652760 | orchestrator | manila : include_tasks -------------------------------------------------- 3.29s
2026-04-17 07:39:12.652771 | orchestrator | manila : Ensuring config directories exist ------------------------------ 3.18s
2026-04-17 07:39:12.652782 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 2.51s
2026-04-17 07:39:12.652792 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.47s
2026-04-17 07:39:12.652802 | orchestrator | manila : Copying over existing policy file ------------------------------ 2.41s
2026-04-17 07:39:12.652813 | orchestrator | service-cert-copy : manila | Copying over backend internal TLS key ------ 2.40s
2026-04-17 07:39:12.652823 | orchestrator | manila : Copy over ceph Manila keyrings --------------------------------- 2.34s
2026-04-17 07:39:12.652834 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.25s
2026-04-17 07:39:12.652844 | orchestrator | service-cert-copy : manila | Copying over backend internal TLS certificate --- 2.20s
2026-04-17 07:39:12.652856 | orchestrator | manila : Ensuring manila service ceph config subdir exists -------------- 2.11s
2026-04-17 07:39:12.859862 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-17 07:39:12.859999 | orchestrator | + osism migrate rabbitmq3to4 delete
2026-04-17 07:39:19.414833 | orchestrator | 2026-04-17 07:39:19 | ERROR  | Unable to get ansible vault password
2026-04-17 07:39:19.414975 | orchestrator | 2026-04-17 07:39:19 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-17 07:39:19.414993 | orchestrator | 2026-04-17 07:39:19 | ERROR  | Dropping encrypted entries
2026-04-17 07:39:19.448988 | orchestrator | 2026-04-17 07:39:19 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-17 07:39:19.696462 | orchestrator | 2026-04-17 07:39:19 | INFO  | Found 127 classic queue(s) in vhost '/'
2026-04-17 07:39:19.745499 | orchestrator | 2026-04-17 07:39:19 | INFO  | Deleted queue: alarm.all.sample
2026-04-17 07:39:19.790832 | orchestrator | 2026-04-17 07:39:19 | INFO  | Deleted queue: alarming.sample
2026-04-17 07:39:19.835855 | orchestrator | 2026-04-17 07:39:19 | INFO  | Deleted queue: barbican.workers
2026-04-17 07:39:19.891854 | orchestrator | 2026-04-17 07:39:19 | INFO  | Deleted queue: barbican.workers.barbican.queue
2026-04-17 07:39:19.921576 | orchestrator | 2026-04-17 07:39:19 | INFO  | Deleted queue: barbican.workers_fanout_20a058ccedf94e3eb50598d76ab757db
2026-04-17 07:39:19.954695 | orchestrator | 2026-04-17 07:39:19 | INFO  | Deleted queue: barbican.workers_fanout_6e063d46835a4e11afca4900bcdaf99e
2026-04-17 07:39:20.000410 | orchestrator | 2026-04-17 07:39:19 | INFO  | Deleted queue: barbican.workers_fanout_8ad5bbb91d084f8d91b232a35d94a57d
2026-04-17 07:39:20.073084 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: barbican_notifications.info
2026-04-17 07:39:20.145859 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: central
2026-04-17 07:39:20.180127 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: central.testbed-node-0
2026-04-17 07:39:20.222078 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: central.testbed-node-1
2026-04-17 07:39:20.267648 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: central.testbed-node-2
2026-04-17 07:39:20.305638 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: central_fanout_1d3f05b198cd48578310d7621ff0c8fe
2026-04-17 07:39:20.341453 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: central_fanout_993e28f14edb4e3aab4835bf990259c4
2026-04-17 07:39:20.374646 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: central_fanout_b84558daeb244be688f86c8b3badd6af
2026-04-17 07:39:20.419864 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: central_fanout_dca8cef115594b468bf3a305f1a6295b
2026-04-17 07:39:20.445094 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: central_fanout_f972dd7ad31146debaefdfa79aae1b67
2026-04-17 07:39:20.484340 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: central_fanout_fd6194b9a6854c94bf97e77ab30a4164
2026-04-17 07:39:20.524866 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: cinder-backup
2026-04-17 07:39:20.576853 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: cinder-backup.testbed-node-0
2026-04-17 07:39:20.615597 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: cinder-backup.testbed-node-1
2026-04-17 07:39:20.668756 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: cinder-backup.testbed-node-2
2026-04-17 07:39:20.723773 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: cinder-scheduler
2026-04-17 07:39:20.764998 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: cinder-scheduler.testbed-node-0
2026-04-17 07:39:20.807396 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: cinder-scheduler.testbed-node-1
2026-04-17 07:39:20.858176 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: cinder-scheduler.testbed-node-2
2026-04-17 07:39:20.895899 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: cinder-volume
2026-04-17 07:39:20.942730 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes
2026-04-17 07:39:20.991190 | orchestrator | 2026-04-17 07:39:20 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0
2026-04-17 07:39:21.026666 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes
2026-04-17 07:39:21.066907 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1
2026-04-17 07:39:21.098435 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes
2026-04-17 07:39:21.142789 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2
2026-04-17 07:39:21.184014 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: compute
2026-04-17 07:39:21.223532 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: compute.testbed-node-3
2026-04-17 07:39:21.267720 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: compute.testbed-node-4
2026-04-17 07:39:21.321684 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: compute.testbed-node-5
2026-04-17 07:39:21.375217 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: conductor
2026-04-17 07:39:21.418924 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: conductor.testbed-node-0
2026-04-17 07:39:21.461561 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: conductor.testbed-node-1
2026-04-17 07:39:21.502457 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: conductor.testbed-node-2
2026-04-17 07:39:21.556156 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: event.sample
2026-04-17 07:39:21.587930 | orchestrator | 2026-04-17 07:39:21 | INFO  | Closed connection: 192.168.16.10:55776 -> 192.168.16.11:5672
2026-04-17 07:39:21.603977 | orchestrator | 2026-04-17 07:39:21 | INFO  | Closed connection: 192.168.16.12:47930 -> 192.168.16.10:5672
2026-04-17 07:39:21.617938 | orchestrator | 2026-04-17 07:39:21 | INFO  | Closed connection: 192.168.16.11:32822 -> 192.168.16.10:5672
2026-04-17 07:39:21.640180 | orchestrator | 2026-04-17 07:39:21 | INFO  | Closed connection: 192.168.16.12:43236 -> 192.168.16.11:5672
2026-04-17 07:39:21.653312 | orchestrator | 2026-04-17 07:39:21 | INFO  | Closed connection: 192.168.16.11:60962 -> 192.168.16.10:5672
2026-04-17 07:39:21.672338 | orchestrator | 2026-04-17 07:39:21 | INFO  | Closed connection: 192.168.16.12:54786 -> 192.168.16.10:5672
2026-04-17 07:39:21.691101 | orchestrator | 2026-04-17 07:39:21 | INFO  | Closed connection: 192.168.16.10:41304 -> 192.168.16.10:5672
2026-04-17 07:39:21.709692 | orchestrator | 2026-04-17 07:39:21 | INFO  | Closed connection: 192.168.16.11:36256 -> 192.168.16.11:5672
2026-04-17 07:39:21.727035 | orchestrator | 2026-04-17 07:39:21 | INFO  | Closed connection: 192.168.16.10:41430 -> 192.168.16.10:5672
2026-04-17 07:39:21.727580 | orchestrator | 2026-04-17 07:39:21 | INFO  | Closed 9 connection(s) for queue: magnum-conductor
2026-04-17 07:39:21.752224 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: magnum-conductor
2026-04-17 07:39:21.799509 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: magnum-conductor.aync76lm2t54
2026-04-17 07:39:21.842002 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: magnum-conductor.dzwct34rcrrw
2026-04-17 07:39:21.896038 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: magnum-conductor.s3ymcvio2asi
2026-04-17 07:39:21.927492 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: magnum-conductor_fanout_11601765e7c442fd998593624dd5766b
2026-04-17 07:39:21.954456 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: magnum-conductor_fanout_28d9c33e2e444996b3de7e9169a4ec20
2026-04-17 07:39:21.993763 | orchestrator | 2026-04-17 07:39:21 | INFO  | Deleted queue: magnum-conductor_fanout_333db82f226d4c8a9b2d64d5a6fefd7e
2026-04-17 07:39:22.052988 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: magnum-conductor_fanout_55ff3626d6574cd4852f07888b0404ea
2026-04-17 07:39:22.077511 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: magnum-conductor_fanout_884a2875b69e49c1b7c59df2bc594c6e
2026-04-17 07:39:22.120578 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: magnum-conductor_fanout_90e197aecc46435a8911ea489b96df7c
2026-04-17 07:39:22.157269 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: magnum-conductor_fanout_b4258b3a1e9944fea5e6e830a4ad9242
2026-04-17 07:39:22.197454 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: magnum-conductor_fanout_c75d781f355d4e3d967594d8b67236e4
2026-04-17 07:39:22.244428 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: magnum-conductor_fanout_e5e7b4f6d9b64be4b73b5d38b4f4e3cb
2026-04-17 07:39:22.284285 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-data
2026-04-17 07:39:22.322392 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-data.testbed-node-0
2026-04-17 07:39:22.364406 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-data.testbed-node-1
2026-04-17 07:39:22.404607 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-data.testbed-node-2
2026-04-17 07:39:22.449946 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-scheduler
2026-04-17 07:39:22.489634 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-scheduler.testbed-node-0
2026-04-17 07:39:22.535022 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-scheduler.testbed-node-1
2026-04-17 07:39:22.574849 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-scheduler.testbed-node-2
2026-04-17 07:39:22.619420 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-share
2026-04-17 07:39:22.670933 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-share.testbed-node-0@cephfsnative1
2026-04-17 07:39:22.714831 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-share.testbed-node-1@cephfsnative1
2026-04-17 07:39:22.756070 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-share.testbed-node-2@cephfsnative1
2026-04-17 07:39:22.789726 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-share_fanout_2812a2c7dca14caab7b2abbeda168aae
2026-04-17 07:39:22.823871 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-share_fanout_283d9e86d6954bd7856fda8f0fd341f9
2026-04-17 07:39:22.868339 | orchestrator | 2026-04-17 07:39:22 | INFO  | Deleted queue: manila-share_fanout_d5aec83b3f244692ae60d87f1eacc951
2026-04-17 07:39:23.002663 | orchestrator | 2026-04-17 07:39:23 | INFO  | Deleted queue: notifications.audit
2026-04-17 07:39:23.143051 | orchestrator | 2026-04-17 07:39:23 | INFO  | Deleted queue: notifications.critical
2026-04-17 07:39:23.276428 | orchestrator | 2026-04-17 07:39:23 | INFO  | Deleted queue: notifications.debug
2026-04-17 07:39:23.417996 | orchestrator | 2026-04-17 07:39:23 | INFO  | Deleted queue: notifications.error
2026-04-17 07:39:23.540965 | orchestrator | 2026-04-17 07:39:23 | INFO  | Deleted queue: notifications.info
2026-04-17 07:39:23.686872 | orchestrator | 2026-04-17 07:39:23 | INFO  | Deleted queue: notifications.sample
2026-04-17 07:39:23.857156 | orchestrator | 2026-04-17 07:39:23 | INFO  | Deleted queue: notifications.warn
2026-04-17 07:39:23.900850 | orchestrator | 2026-04-17 07:39:23 | INFO  | Deleted queue: octavia_provisioning_v2
2026-04-17 07:39:23.947060 | orchestrator |
2026-04-17 07:39:23 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-0 2026-04-17 07:39:23.995633 | orchestrator | 2026-04-17 07:39:23 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-1 2026-04-17 07:39:24.037878 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-2 2026-04-17 07:39:24.081506 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: producer 2026-04-17 07:39:24.127394 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: producer.testbed-node-0 2026-04-17 07:39:24.164157 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: producer.testbed-node-1 2026-04-17 07:39:24.207135 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: producer.testbed-node-2 2026-04-17 07:39:24.238783 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: producer_fanout_17094020c5e345d882f1e31047fa2166 2026-04-17 07:39:24.275303 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: producer_fanout_24f3ae76481346be918965111f6f6c75 2026-04-17 07:39:24.325114 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: producer_fanout_41613344066444828f859ac217def4d8 2026-04-17 07:39:24.364655 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: producer_fanout_5d731c00521847df83a2e107755e0665 2026-04-17 07:39:24.401960 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: producer_fanout_6de7a46da95e4cf58eff9f521c447aee 2026-04-17 07:39:24.445140 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: producer_fanout_b8b22cc72ded4377a9881e50b4e9ea60 2026-04-17 07:39:24.488159 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: q-plugin 2026-04-17 07:39:24.542346 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: q-plugin.testbed-node-0 2026-04-17 07:39:24.589225 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: q-plugin.testbed-node-1 2026-04-17 07:39:24.633852 | orchestrator | 2026-04-17 07:39:24 | INFO 
 | Deleted queue: q-plugin.testbed-node-2 2026-04-17 07:39:24.680110 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: q-reports-plugin 2026-04-17 07:39:24.723316 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: q-reports-plugin.testbed-node-0 2026-04-17 07:39:24.759227 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: q-reports-plugin.testbed-node-1 2026-04-17 07:39:24.803176 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: q-reports-plugin.testbed-node-2 2026-04-17 07:39:24.847343 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: q-server-resource-versions 2026-04-17 07:39:24.895316 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-0 2026-04-17 07:39:24.933338 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-1 2026-04-17 07:39:24.988147 | orchestrator | 2026-04-17 07:39:24 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-2 2026-04-17 07:39:25.030503 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: reply_16a47e6e144e4e9eb082064723acff2d 2026-04-17 07:39:25.061097 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: reply_2c17d20d06534f34bd816be7286ccdc2 2026-04-17 07:39:25.089652 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: reply_330fd5dafd4a4487996385036a8a6aeb 2026-04-17 07:39:25.119154 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: reply_554e3ea47d424e0b94ec43865b5b40ad 2026-04-17 07:39:25.147491 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: reply_6756d7581cab4b7f9073be9fa13ee6d1 2026-04-17 07:39:25.181084 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: reply_6c20b09b10d64677a2cdcec5f39474eb 2026-04-17 07:39:25.215704 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: reply_7483d828da734af9ae08d26012450022 2026-04-17 07:39:25.245957 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted 
queue: reply_8e316462ec234151b33b9b7522554dc2 2026-04-17 07:39:25.276484 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: reply_e9c4f1c2e4fc44e1865160226d7f6199 2026-04-17 07:39:25.313078 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: reply_f1982abd987d400886ef1fa888ea0982 2026-04-17 07:39:25.348890 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: scheduler 2026-04-17 07:39:25.386225 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: scheduler.testbed-node-0 2026-04-17 07:39:25.444714 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: scheduler.testbed-node-1 2026-04-17 07:39:25.489173 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: scheduler.testbed-node-2 2026-04-17 07:39:25.530513 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: worker 2026-04-17 07:39:25.576118 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: worker.testbed-node-0 2026-04-17 07:39:25.630824 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: worker.testbed-node-1 2026-04-17 07:39:25.675422 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: worker.testbed-node-2 2026-04-17 07:39:25.718135 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: worker_fanout_1382a663cb5249509b4ad3c892ad603d 2026-04-17 07:39:25.758667 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: worker_fanout_29bd8282b07946fb8a2f2908cbedb782 2026-04-17 07:39:25.791119 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: worker_fanout_609708bc5dd440e2a6b56d99ae6a008f 2026-04-17 07:39:25.831710 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: worker_fanout_a3218af8539a4693bbf1ed32fbb1a049 2026-04-17 07:39:25.883708 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: worker_fanout_a5dab00bcf4c4061815203224ef4f7a1 2026-04-17 07:39:25.932309 | orchestrator | 2026-04-17 07:39:25 | INFO  | Deleted queue: worker_fanout_e33095a7518b4a86b127d642c475903e 2026-04-17 
07:39:25.932387 | orchestrator | 2026-04-17 07:39:25 | INFO  | Successfully deleted 127 queue(s) in vhost '/' 2026-04-17 07:39:26.199406 | orchestrator | + osism migrate rabbitmq3to4 list 2026-04-17 07:39:32.615002 | orchestrator | 2026-04-17 07:39:32 | ERROR  | Unable to get ansible vault password 2026-04-17 07:39:32.615113 | orchestrator | 2026-04-17 07:39:32 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-17 07:39:32.615132 | orchestrator | 2026-04-17 07:39:32 | ERROR  | Dropping encrypted entries 2026-04-17 07:39:32.647879 | orchestrator | 2026-04-17 07:39:32 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-17 07:39:32.813493 | orchestrator | 2026-04-17 07:39:32 | INFO  | Found 13 classic queue(s) in vhost '/': 2026-04-17 07:39:32.813619 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-04-17 07:39:32.813670 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor.aync76lm2t54 (vhost: /, messages: 0) 2026-04-17 07:39:32.813693 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor.dzwct34rcrrw (vhost: /, messages: 0) 2026-04-17 07:39:32.813712 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor.s3ymcvio2asi (vhost: /, messages: 0) 2026-04-17 07:39:32.813731 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor_fanout_11601765e7c442fd998593624dd5766b (vhost: /, messages: 0) 2026-04-17 07:39:32.813752 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor_fanout_28d9c33e2e444996b3de7e9169a4ec20 (vhost: /, messages: 0) 2026-04-17 07:39:32.813770 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor_fanout_333db82f226d4c8a9b2d64d5a6fefd7e (vhost: /, messages: 0) 2026-04-17 07:39:32.813819 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor_fanout_55ff3626d6574cd4852f07888b0404ea (vhost: /, 
messages: 0) 2026-04-17 07:39:32.813839 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor_fanout_884a2875b69e49c1b7c59df2bc594c6e (vhost: /, messages: 0) 2026-04-17 07:39:32.813858 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor_fanout_90e197aecc46435a8911ea489b96df7c (vhost: /, messages: 0) 2026-04-17 07:39:32.813876 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor_fanout_b4258b3a1e9944fea5e6e830a4ad9242 (vhost: /, messages: 0) 2026-04-17 07:39:32.813896 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor_fanout_c75d781f355d4e3d967594d8b67236e4 (vhost: /, messages: 0) 2026-04-17 07:39:32.813915 | orchestrator | 2026-04-17 07:39:32 | INFO  |  - magnum-conductor_fanout_e5e7b4f6d9b64be4b73b5d38b4f4e3cb (vhost: /, messages: 0) 2026-04-17 07:39:33.094090 | orchestrator | + osism migrate rabbitmq3to4 list --vhost openstack --quorum 2026-04-17 07:39:39.350569 | orchestrator | 2026-04-17 07:39:39 | ERROR  | Unable to get ansible vault password 2026-04-17 07:39:39.350676 | orchestrator | 2026-04-17 07:39:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-17 07:39:39.350692 | orchestrator | 2026-04-17 07:39:39 | ERROR  | Dropping encrypted entries 2026-04-17 07:39:39.383772 | orchestrator | 2026-04-17 07:39:39 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-04-17 07:39:39.562387 | orchestrator | 2026-04-17 07:39:39 | INFO  | Found 192 quorum queue(s) in vhost 'openstack': 2026-04-17 07:39:39.562556 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - alarm.all.sample (vhost: openstack, messages: 0) 2026-04-17 07:39:39.562574 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - alarming.sample (vhost: openstack, messages: 0) 2026-04-17 07:39:39.562610 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - barbican.workers (vhost: openstack, messages: 0) 2026-04-17 07:39:39.562624 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - barbican.workers.barbican.queue (vhost: openstack, messages: 0) 2026-04-17 07:39:39.562638 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - barbican.workers_fanout_testbed-node-0:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.562737 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - barbican.workers_fanout_testbed-node-1:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.562753 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - barbican.workers_fanout_testbed-node-2:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.562765 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - barbican_notifications.info (vhost: openstack, messages: 0) 2026-04-17 07:39:39.563082 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - central (vhost: openstack, messages: 0) 2026-04-17 07:39:39.563112 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - central.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.563130 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - central.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.563337 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - central.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.563357 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - central_fanout_testbed-node-0:designate-central:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.563452 | orchestrator | 
2026-04-17 07:39:39 | INFO  |  - central_fanout_testbed-node-0:designate-central:2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.563699 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - central_fanout_testbed-node-1:designate-central:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.563721 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - central_fanout_testbed-node-1:designate-central:2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564035 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - central_fanout_testbed-node-2:designate-central:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564057 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - central_fanout_testbed-node-2:designate-central:2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564069 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-backup (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564392 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-backup.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564413 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-backup.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564425 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-backup.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564453 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-backup_fanout_testbed-node-0:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564532 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-backup_fanout_testbed-node-1:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564713 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-backup_fanout_testbed-node-2:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564731 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-scheduler (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564807 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - 
cinder-scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564823 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.564913 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.565163 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-scheduler_fanout_testbed-node-0:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.565183 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-scheduler_fanout_testbed-node-1:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.565389 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-scheduler_fanout_testbed-node-2:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.565409 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume (vhost: openstack, messages: 0) 2026-04-17 07:39:39.565692 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: openstack, messages: 0) 2026-04-17 07:39:39.565710 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.565986 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_testbed-node-0:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.566076 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: openstack, messages: 0) 2026-04-17 07:39:39.566094 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.566110 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_testbed-node-1:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-17 
07:39:39.566371 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: openstack, messages: 0) 2026-04-17 07:39:39.566401 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.566418 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_testbed-node-2:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.566844 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume_fanout_testbed-node-0:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.566885 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume_fanout_testbed-node-1:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.566903 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - cinder-volume_fanout_testbed-node-2:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.566919 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - compute (vhost: openstack, messages: 0) 2026-04-17 07:39:39.566935 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - compute.testbed-node-3 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.567199 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - compute.testbed-node-4 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.567223 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - compute.testbed-node-5 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.567398 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - compute_fanout_testbed-node-3:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.567425 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - compute_fanout_testbed-node-4:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.567562 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - compute_fanout_testbed-node-5:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.567582 | orchestrator | 
2026-04-17 07:39:39 | INFO  |  - conductor (vhost: openstack, messages: 0) 2026-04-17 07:39:39.567687 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - conductor.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.567700 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - conductor.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.567871 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - conductor.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.567986 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.568091 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.568163 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.568349 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.568423 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.568694 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.568709 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - event.sample (vhost: openstack, messages: 5) 2026-04-17 07:39:39.568717 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-data (vhost: openstack, messages: 0) 2026-04-17 07:39:39.568825 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-data.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.568838 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-data.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.569056 | orchestrator | 2026-04-17 
07:39:39 | INFO  |  - manila-data.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.569071 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-data_fanout_testbed-node-0:manila-data:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.569212 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-data_fanout_testbed-node-1:manila-data:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.569457 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-data_fanout_testbed-node-2:manila-data:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.569646 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-scheduler (vhost: openstack, messages: 0) 2026-04-17 07:39:39.569659 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.569667 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570285 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570299 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-scheduler_fanout_testbed-node-0:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570314 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-scheduler_fanout_testbed-node-1:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570321 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-scheduler_fanout_testbed-node-2:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570457 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-share (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570469 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570477 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 
(vhost: openstack, messages: 0) 2026-04-17 07:39:39.570494 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570506 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-share_fanout_testbed-node-0:manila-share:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570516 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-share_fanout_testbed-node-1:manila-share:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570584 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - manila-share_fanout_testbed-node-2:manila-share:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570740 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - notifications.audit (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570946 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - notifications.critical (vhost: openstack, messages: 0) 2026-04-17 07:39:39.570968 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - notifications.debug (vhost: openstack, messages: 0) 2026-04-17 07:39:39.571079 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - notifications.error (vhost: openstack, messages: 0) 2026-04-17 07:39:39.571097 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - notifications.info (vhost: openstack, messages: 0) 2026-04-17 07:39:39.571349 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - notifications.sample (vhost: openstack, messages: 0) 2026-04-17 07:39:39.571373 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - notifications.warn (vhost: openstack, messages: 0) 2026-04-17 07:39:39.571381 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - octavia_provisioning_v2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.571511 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.571523 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: openstack, messages: 0) 
2026-04-17 07:39:39.571666 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.571782 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-0:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.571869 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-1:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.571883 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-2:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.572935 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - osism-listener-cinder (vhost: openstack, messages: 0) 2026-04-17 07:39:39.572950 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - osism-listener-glance (vhost: openstack, messages: 0) 2026-04-17 07:39:39.572956 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - osism-listener-ironic (vhost: openstack, messages: 0) 2026-04-17 07:39:39.572963 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - osism-listener-keystone (vhost: openstack, messages: 0) 2026-04-17 07:39:39.572970 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - osism-listener-neutron (vhost: openstack, messages: 0) 2026-04-17 07:39:39.572976 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - osism-listener-nova (vhost: openstack, messages: 0) 2026-04-17 07:39:39.572983 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - producer (vhost: openstack, messages: 0) 2026-04-17 07:39:39.573001 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - producer.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.573007 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - producer.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.573014 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - producer.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-17 07:39:39.573028 | 
orchestrator | 2026-04-17 07:39:39 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573035 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573041 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573092 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573102 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573499 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573511 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573518 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573534 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573542 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573686 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:4 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573699 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:5 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573705 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:6 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.573788 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:4 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575387 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:5 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575414 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:6 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575422 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:4 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575428 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:5 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575436 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:6 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575443 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575450 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575456 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575463 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575470 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575477 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:10 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575491 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:11 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575498 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:12 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575505 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575512 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:3 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575519 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575526 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:10 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575533 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:11 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575549 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:12 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575556 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575563 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:3 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575576 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575583 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:10 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575589 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:11 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575596 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:12 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575657 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575666 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:3 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.575673 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576348 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576477 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576507 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576527 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:7 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576545 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:8 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576571 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:9 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576583 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:7 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576594 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:8 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576771 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:9 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576793 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:7 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576804 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:8 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576815 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:9 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576826 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - reply_testbed-node-0:designate-manage:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.576989 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - reply_testbed-node-0:designate-producer:3 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577007 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - reply_testbed-node-0:designate-producer:4 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577031 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - reply_testbed-node-1:designate-producer:3 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577042 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - reply_testbed-node-1:designate-producer:4 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577474 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - reply_testbed-node-2:designate-producer:3 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577526 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - reply_testbed-node-2:designate-producer:4 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577537 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - reply_testbed-node-3:nova-compute:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577549 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - reply_testbed-node-4:nova-compute:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577652 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - reply_testbed-node-5:nova-compute:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577668 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - scheduler (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577680 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - scheduler.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577832 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - scheduler.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.577851 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - scheduler.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578222 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578297 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578304 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578363 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578373 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578462 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578475 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - worker (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578699 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - worker.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578713 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - worker.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578720 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - worker.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578773 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.578904 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.579051 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.579106 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.579162 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.579228 | orchestrator | 2026-04-17 07:39:39 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-17 07:39:39.824889 | orchestrator | + osism migrate rabbitmq3to4 delete-exchanges
2026-04-17 07:39:46.154079 | orchestrator | 2026-04-17 07:39:46 | ERROR  | Unable to get ansible vault password
2026-04-17 07:39:46.154190 | orchestrator | 2026-04-17 07:39:46 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-17 07:39:46.154206 | orchestrator | 2026-04-17 07:39:46 | ERROR  | Dropping encrypted entries
2026-04-17 07:39:46.187805 | orchestrator | 2026-04-17 07:39:46 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
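[editor's note] The delete-exchanges step above drives the RabbitMQ Management HTTP API. A minimal sketch of how the per-exchange DELETE URLs are formed, for orientation only: the `exchange_delete_urls` helper is illustrative and not part of osism; only the host, port, and vhost values come from the log.

```python
from urllib.parse import quote

def exchange_delete_urls(host: str, port: int, vhost: str, exchanges: list[str]) -> list[str]:
    """Build Management API DELETE URLs for the given exchanges.

    The vhost is percent-encoded as a single path segment, so the
    default vhost '/' becomes '%2F'. The built-in 'amq.*' exchanges
    and the unnamed default exchange cannot be deleted, so they are
    skipped.
    """
    vh = quote(vhost, safe="")
    return [
        f"http://{host}:{port}/api/exchanges/{vh}/{quote(name, safe='')}"
        for name in exchanges
        if name and not name.startswith("amq.")
    ]

# Only 'nova' survives the filter; each resulting URL would then be
# issued as an HTTP DELETE with the management credentials.
urls = exchange_delete_urls("192.168.16.10", 15672, "/", ["nova", "amq.topic", ""])
```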
2026-04-17 07:39:46.210820 | orchestrator | 2026-04-17 07:39:46 | INFO  | Found 27 exchange(s) in vhost '/'
2026-04-17 07:39:46.246346 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: aodh
2026-04-17 07:39:46.276128 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: ceilometer
2026-04-17 07:39:46.318369 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: cinder
2026-04-17 07:39:46.364193 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: designate
2026-04-17 07:39:46.407964 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: dns
2026-04-17 07:39:46.453787 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: glance
2026-04-17 07:39:46.497887 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: heat
2026-04-17 07:39:46.551655 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: ironic
2026-04-17 07:39:46.597966 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: keystone
2026-04-17 07:39:46.636639 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: l3_agent_fanout
2026-04-17 07:39:46.696876 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: magnum
2026-04-17 07:39:46.754048 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: magnum-conductor_fanout
2026-04-17 07:39:46.806746 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: neutron
2026-04-17 07:39:46.845486 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: neutron-vo-Network-1.1_fanout
2026-04-17 07:39:46.877600 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: neutron-vo-Port-1.10_fanout
2026-04-17 07:39:46.919363 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: neutron-vo-SecurityGroup-1.6_fanout
2026-04-17 07:39:46.953099 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: neutron-vo-SecurityGroupRule-1.3_fanout
2026-04-17 07:39:46.985284 | orchestrator | 2026-04-17 07:39:46 | INFO  | Deleted exchange: neutron-vo-Subnet-1.2_fanout
2026-04-17 07:39:47.024526 | orchestrator | 2026-04-17 07:39:47 | INFO  | Deleted exchange: nova
2026-04-17 07:39:47.061374 | orchestrator | 2026-04-17 07:39:47 | INFO  | Deleted exchange: octavia
2026-04-17 07:39:47.103031 | orchestrator | 2026-04-17 07:39:47 | INFO  | Deleted exchange: openstack
2026-04-17 07:39:47.137838 | orchestrator | 2026-04-17 07:39:47 | INFO  | Deleted exchange: q-agent-notifier-port-update_fanout
2026-04-17 07:39:47.177663 | orchestrator | 2026-04-17 07:39:47 | INFO  | Deleted exchange: q-agent-notifier-security_group-update_fanout
2026-04-17 07:39:47.209288 | orchestrator | 2026-04-17 07:39:47 | INFO  | Deleted exchange: scheduler_fanout
2026-04-17 07:39:47.245047 | orchestrator | 2026-04-17 07:39:47 | INFO  | Deleted exchange: swift
2026-04-17 07:39:47.290835 | orchestrator | 2026-04-17 07:39:47 | INFO  | Deleted exchange: trove
2026-04-17 07:39:47.324951 | orchestrator | 2026-04-17 07:39:47 | INFO  | Deleted exchange: zaqar
2026-04-17 07:39:47.325127 | orchestrator | 2026-04-17 07:39:47 | INFO  | Successfully deleted 27 exchange(s) in vhost '/'
2026-04-17 07:39:47.593338 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-04-17 07:39:54.030763 | orchestrator | 2026-04-17 07:39:54 | ERROR  | Unable to get ansible vault password
2026-04-17 07:39:54.030871 | orchestrator | 2026-04-17 07:39:54 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-17 07:39:54.030888 | orchestrator | 2026-04-17 07:39:54 | ERROR  | Dropping encrypted entries
2026-04-17 07:39:54.064602 | orchestrator | 2026-04-17 07:39:54 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-17 07:39:54.076451 | orchestrator | 2026-04-17 07:39:54 | INFO  | No exchanges found in vhost '/'
2026-04-17 07:39:54.324398 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-17 07:39:54.324503 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/400-monitoring.sh
2026-04-17 07:39:55.776450 | orchestrator | 2026-04-17 07:39:55 | INFO  | Prepare task for execution of prometheus.
2026-04-17 07:39:55.842493 | orchestrator | 2026-04-17 07:39:55 | INFO  | Task 0ce260c5-69cb-4ef0-b64d-4bdfdca3155d (prometheus) was prepared for execution.
2026-04-17 07:39:55.842593 | orchestrator | 2026-04-17 07:39:55 | INFO  | It takes a moment until task 0ce260c5-69cb-4ef0-b64d-4bdfdca3155d (prometheus) has been started and output is visible here.
2026-04-17 07:40:14.071821 | orchestrator |
2026-04-17 07:40:14.071914 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 07:40:14.071926 | orchestrator |
2026-04-17 07:40:14.071934 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 07:40:14.071954 | orchestrator | Friday 17 April 2026 07:40:00 +0000 (0:00:01.590) 0:00:01.590 **********
2026-04-17 07:40:14.071973 | orchestrator | ok: [testbed-manager]
2026-04-17 07:40:14.071981 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:40:14.071988 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:40:14.071995 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:40:14.072001 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:40:14.072008 | orchestrator | ok: [testbed-node-4]
2026-04-17 07:40:14.072015 | orchestrator | ok: [testbed-node-5]
2026-04-17 07:40:14.072021 | orchestrator |
2026-04-17 07:40:14.072028 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 07:40:14.072035 | orchestrator | Friday 17 April 2026 07:40:03 +0000 (0:00:02.807) 0:00:04.397 **********
2026-04-17 07:40:14.072042 | orchestrator | ok:
[testbed-manager] => (item=enable_prometheus_True) 2026-04-17 07:40:14.072049 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-17 07:40:14.072056 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-17 07:40:14.072062 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-17 07:40:14.072069 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-17 07:40:14.072075 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-17 07:40:14.072082 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-17 07:40:14.072089 | orchestrator | 2026-04-17 07:40:14.072095 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-17 07:40:14.072119 | orchestrator | 2026-04-17 07:40:14.072126 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-17 07:40:14.072132 | orchestrator | Friday 17 April 2026 07:40:06 +0000 (0:00:03.074) 0:00:07.472 ********** 2026-04-17 07:40:14.072139 | orchestrator | included: /ansible/roles/prometheus/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 07:40:14.072148 | orchestrator | 2026-04-17 07:40:14.072154 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-17 07:40:14.072161 | orchestrator | Friday 17 April 2026 07:40:10 +0000 (0:00:04.087) 0:00:11.560 ********** 2026-04-17 07:40:14.072170 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:14.072182 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-17 07:40:14.072190 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:14.072215 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:14.072276 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:14.072285 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:14.072298 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:14.072305 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:14.072312 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:14.072319 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:14.072326 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:14.072341 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:14.825467 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:14.825595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:14.825612 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:14.825624 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:14.825636 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 
07:40:14.825648 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:14.825659 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:14.825703 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:14.825723 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 07:40:14.825739 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:40:14.825754 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:14.825765 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 07:40:14.825778 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 07:40:14.825801 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:22.279641 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:22.279784 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:22.279813 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:22.279835 | orchestrator | 2026-04-17 07:40:22.279857 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-17 07:40:22.279877 | orchestrator | Friday 17 April 2026 07:40:16 +0000 (0:00:05.823) 0:00:17.383 ********** 2026-04-17 07:40:22.279896 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-17 07:40:22.279909 | orchestrator | 2026-04-17 07:40:22.279921 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-17 07:40:22.279932 | orchestrator | Friday 17 April 2026 07:40:19 +0000 (0:00:02.724) 0:00:20.108 ********** 2026-04-17 07:40:22.279947 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-17 07:40:22.279961 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:22.280033 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:22.280047 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:22.280059 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:22.280070 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:22.280081 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:22.280093 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 07:40:22.280105 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 
07:40:22.280117 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:22.280149 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:24.151924 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:24.152033 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:24.152049 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:24.152063 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:24.152075 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:24.152087 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:24.152140 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 07:40:24.152174 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:24.152190 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:40:24.152205 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 07:40:24.152251 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 07:40:24.152263 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:24.152283 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:24.152300 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:24.152322 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:27.936482 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:27.936599 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:27.936616 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:40:27.936628 | orchestrator | 2026-04-17 07:40:27.936642 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-17 07:40:27.936654 | orchestrator | Friday 17 April 2026 07:40:26 +0000 (0:00:06.981) 0:00:27.089 ********** 2026-04-17 07:40:27.936670 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-17 07:40:27.936727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 07:40:27.936741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 07:40:27.936773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 07:40:27.936786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 07:40:27.936798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 07:40:27.936809 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 07:40:27.936829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 07:40:27.936840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 07:40:27.936870 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 07:40:27.936890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 07:40:28.525880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 07:40:28.525986 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 07:40:28.526002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 07:40:28.526097 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:40:28.526115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 07:40:28.526127 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:40:28.526138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 07:40:28.526169 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:40:28.526206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:28.526296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:28.526319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:28.526351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:40:28.526370 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:40:28.526383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:28.526395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:40:28.526408 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:40:28.526428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:28.526452 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:31.282560 | orchestrator | skipping: [testbed-manager]
2026-04-17 07:40:31.282663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:31.282684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:40:31.282722 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:40:31.282743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:31.282764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:31.282782 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:40:31.282801 | orchestrator |
2026-04-17 07:40:31.282820 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-04-17 07:40:31.282838 | orchestrator | Friday 17 April 2026 07:40:30 +0000 (0:00:03.666) 0:00:30.755 **********
2026-04-17 07:40:31.282879 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-17 07:40:31.282929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:31.282952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:31.282964 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:31.282993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:31.283004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:31.283015 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:31.283032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:31.283044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:31.283064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:32.527174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:32.527296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:32.527308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:32.527317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:32.527326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:32.527347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:32.527356 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:40:32.527365 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:40:32.527389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:40:32.527406 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:32.527415 | orchestrator | skipping: [testbed-manager]
2026-04-17 07:40:32.527423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:32.527431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:32.527439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:32.527451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:32.527459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:32.527472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:37.561372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:40:37.561483 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:40:37.561501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:37.561515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:37.561527 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:40:37.561539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:40:37.561550 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:40:37.561579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:40:37.561590 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:40:37.561601 | orchestrator |
2026-04-17 07:40:37.561613 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-04-17 07:40:37.561625 | orchestrator | Friday 17 April 2026 07:40:34 +0000 (0:00:04.348) 0:00:35.104 **********
2026-04-17 07:40:37.561657 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-17 07:40:37.561696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:37.561709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:37.561721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:37.561732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:37.561749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:37.561760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:37.561780 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:40:37.561800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:39.644839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:39.644950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:39.644967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:39.644979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:39.645009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:39.645044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:39.645056 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:40:39.645086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:39.645099 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:40:39.645110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:40:39.645121 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:40:39.645133 | orchestrator | changed: [testbed-node-4]
=> (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 07:40:39.645152 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:40:39.645173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:40:39.645192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:41:12.631312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 07:41:12.631446 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:41:12.631474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:41:12.631514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:41:12.631562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 07:41:12.631574 | orchestrator | 2026-04-17 07:41:12.631586 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-17 07:41:12.631598 | orchestrator 
| Friday 17 April 2026 07:40:42 +0000 (0:00:07.861) 0:00:42.965 ********** 2026-04-17 07:41:12.631607 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 07:41:12.631619 | orchestrator | 2026-04-17 07:41:12.631629 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-17 07:41:12.631638 | orchestrator | Friday 17 April 2026 07:40:44 +0000 (0:00:02.285) 0:00:45.251 ********** 2026-04-17 07:41:12.631647 | orchestrator | skipping: [testbed-manager] 2026-04-17 07:41:12.631657 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:12.631667 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:41:12.631676 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:41:12.631686 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:12.631695 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:41:12.631705 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:12.631714 | orchestrator | 2026-04-17 07:41:12.631724 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-17 07:41:12.631733 | orchestrator | Friday 17 April 2026 07:40:46 +0000 (0:00:02.080) 0:00:47.331 ********** 2026-04-17 07:41:12.631743 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 07:41:12.631752 | orchestrator | 2026-04-17 07:41:12.631762 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-17 07:41:12.631772 | orchestrator | Friday 17 April 2026 07:40:48 +0000 (0:00:01.777) 0:00:49.109 ********** 2026-04-17 07:41:12.631781 | orchestrator | [WARNING]: Skipped 2026-04-17 07:41:12.631794 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.631806 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-17 07:41:12.631816 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 
07:41:12.631828 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-17 07:41:12.631855 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 07:41:12.631867 | orchestrator | [WARNING]: Skipped 2026-04-17 07:41:12.631878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.631889 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-17 07:41:12.631900 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.631912 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-17 07:41:12.631923 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 07:41:12.631933 | orchestrator | [WARNING]: Skipped 2026-04-17 07:41:12.631945 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.631956 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-17 07:41:12.631967 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.631978 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-17 07:41:12.632001 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 07:41:12.632012 | orchestrator | [WARNING]: Skipped 2026-04-17 07:41:12.632023 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.632033 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-17 07:41:12.632044 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.632055 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-17 07:41:12.632065 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 07:41:12.632076 | orchestrator | [WARNING]: Skipped 2026-04-17 07:41:12.632087 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.632099 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-17 07:41:12.632110 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.632121 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-17 07:41:12.632132 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 07:41:12.632143 | orchestrator | [WARNING]: Skipped 2026-04-17 07:41:12.632152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.632162 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-17 07:41:12.632171 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.632181 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-17 07:41:12.632219 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 07:41:12.632231 | orchestrator | [WARNING]: Skipped 2026-04-17 07:41:12.632241 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.632255 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-17 07:41:12.632266 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 07:41:12.632275 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-17 07:41:12.632285 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 07:41:12.632295 | orchestrator | 2026-04-17 07:41:12.632304 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-17 07:41:12.632314 | orchestrator | Friday 17 April 2026 07:40:51 +0000 (0:00:03.175) 0:00:52.284 ********** 2026-04-17 07:41:12.632323 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 07:41:12.632333 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:12.632344 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 07:41:12.632354 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:41:12.632364 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 07:41:12.632374 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:12.632383 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 07:41:12.632393 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:41:12.632403 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 07:41:12.632412 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:41:12.632422 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 07:41:12.632431 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:12.632441 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-17 07:41:12.632451 | orchestrator | 2026-04-17 07:41:12.632461 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-17 07:41:12.632470 | orchestrator | Friday 17 April 2026 07:41:10 +0000 (0:00:19.393) 0:01:11.678 ********** 2026-04-17 07:41:12.632486 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 07:41:12.632496 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:12.632506 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 07:41:12.632515 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 07:41:12.632525 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 07:41:12.632534 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 07:41:12.632544 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:12.632554 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:41:12.632570 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 07:41:53.857457 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:41:53.857600 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 07:41:53.857619 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:53.857632 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-17 07:41:53.857644 | orchestrator | 2026-04-17 07:41:53.857656 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-17 07:41:53.857667 | orchestrator | Friday 17 April 2026 07:41:15 +0000 (0:00:04.742) 0:01:16.421 ********** 2026-04-17 07:41:53.857679 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 07:41:53.857691 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:53.857702 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 07:41:53.857713 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:41:53.857724 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 07:41:53.857735 | orchestrator | 
skipping: [testbed-node-4] 2026-04-17 07:41:53.857746 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 07:41:53.857757 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:41:53.857768 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-17 07:41:53.857779 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 07:41:53.857790 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:53.857801 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 07:41:53.857812 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:53.857822 | orchestrator | 2026-04-17 07:41:53.857834 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-17 07:41:53.857845 | orchestrator | Friday 17 April 2026 07:41:18 +0000 (0:00:02.896) 0:01:19.317 ********** 2026-04-17 07:41:53.857855 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 07:41:53.857866 | orchestrator | 2026-04-17 07:41:53.857893 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-17 07:41:53.857905 | orchestrator | Friday 17 April 2026 07:41:20 +0000 (0:00:01.734) 0:01:21.052 ********** 2026-04-17 07:41:53.857916 | orchestrator | skipping: [testbed-manager] 2026-04-17 07:41:53.857926 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:53.857938 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:41:53.857949 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:41:53.857960 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:53.857993 | orchestrator | skipping: 
[testbed-node-4] 2026-04-17 07:41:53.858007 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:53.858066 | orchestrator | 2026-04-17 07:41:53.858080 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-17 07:41:53.858093 | orchestrator | Friday 17 April 2026 07:41:22 +0000 (0:00:01.956) 0:01:23.008 ********** 2026-04-17 07:41:53.858105 | orchestrator | skipping: [testbed-manager] 2026-04-17 07:41:53.858117 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:53.858130 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:53.858142 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:41:53.858155 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:41:53.858168 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:41:53.858202 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:41:53.858215 | orchestrator | 2026-04-17 07:41:53.858228 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-17 07:41:53.858240 | orchestrator | Friday 17 April 2026 07:41:26 +0000 (0:00:03.802) 0:01:26.811 ********** 2026-04-17 07:41:53.858252 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 07:41:53.858265 | orchestrator | skipping: [testbed-manager] 2026-04-17 07:41:53.858278 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 07:41:53.858291 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 07:41:53.858303 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:53.858316 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 07:41:53.858328 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:41:53.858340 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:41:53.858353 | orchestrator | 
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 07:41:53.858366 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:41:53.858377 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 07:41:53.858388 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:53.858399 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 07:41:53.858409 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:53.858421 | orchestrator | 2026-04-17 07:41:53.858432 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-17 07:41:53.858443 | orchestrator | Friday 17 April 2026 07:41:28 +0000 (0:00:02.722) 0:01:29.534 ********** 2026-04-17 07:41:53.858471 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 07:41:53.858483 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 07:41:53.858494 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:53.858505 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:41:53.858516 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 07:41:53.858526 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:41:53.858537 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-17 07:41:53.858548 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 07:41:53.858559 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:53.858570 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 07:41:53.858581 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:41:53.858592 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 07:41:53.858611 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:53.858622 | orchestrator | 2026-04-17 07:41:53.858632 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-17 07:41:53.858643 | orchestrator | Friday 17 April 2026 07:41:31 +0000 (0:00:02.965) 0:01:32.500 ********** 2026-04-17 07:41:53.858654 | orchestrator | [WARNING]: Skipped 2026-04-17 07:41:53.858665 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-17 07:41:53.858676 | orchestrator | due to this access issue: 2026-04-17 07:41:53.858687 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-17 07:41:53.858697 | orchestrator | not a directory 2026-04-17 07:41:53.858708 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 07:41:53.858719 | orchestrator | 2026-04-17 07:41:53.858730 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-17 07:41:53.858741 | orchestrator | Friday 17 April 2026 07:41:34 +0000 (0:00:02.430) 0:01:34.931 ********** 2026-04-17 07:41:53.858752 | orchestrator | skipping: [testbed-manager] 2026-04-17 07:41:53.858762 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:53.858773 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:41:53.858784 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:41:53.858794 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:53.858805 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:41:53.858816 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:53.858826 | 
orchestrator | 2026-04-17 07:41:53.858843 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-17 07:41:53.858854 | orchestrator | Friday 17 April 2026 07:41:36 +0000 (0:00:02.011) 0:01:36.942 ********** 2026-04-17 07:41:53.858865 | orchestrator | skipping: [testbed-manager] 2026-04-17 07:41:53.858876 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:53.858887 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:41:53.858897 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:41:53.858908 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:53.858919 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:41:53.858929 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:53.858940 | orchestrator | 2026-04-17 07:41:53.858951 | orchestrator | TASK [prometheus : Check for the existence of Prometheus v2 container volume] *** 2026-04-17 07:41:53.858962 | orchestrator | Friday 17 April 2026 07:41:38 +0000 (0:00:02.496) 0:01:39.438 ********** 2026-04-17 07:41:53.858972 | orchestrator | ok: [testbed-manager] 2026-04-17 07:41:53.858983 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:41:53.858994 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:41:53.859005 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:41:53.859016 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:41:53.859026 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:41:53.859037 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:41:53.859048 | orchestrator | 2026-04-17 07:41:53.859059 | orchestrator | TASK [prometheus : Gracefully stop Prometheus] ********************************* 2026-04-17 07:41:53.859069 | orchestrator | Friday 17 April 2026 07:41:41 +0000 (0:00:02.404) 0:01:41.843 ********** 2026-04-17 07:41:53.859080 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:53.859091 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:41:53.859102 | orchestrator | skipping: [testbed-node-2] 
2026-04-17 07:41:53.859113 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:53.859123 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:41:53.859134 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:53.859144 | orchestrator | changed: [testbed-manager] 2026-04-17 07:41:53.859155 | orchestrator | 2026-04-17 07:41:53.859166 | orchestrator | TASK [prometheus : Create new Prometheus v3 volume] **************************** 2026-04-17 07:41:53.859196 | orchestrator | Friday 17 April 2026 07:41:49 +0000 (0:00:08.184) 0:01:50.028 ********** 2026-04-17 07:41:53.859207 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:53.859217 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:41:53.859228 | orchestrator | changed: [testbed-manager] 2026-04-17 07:41:53.859246 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:41:53.859257 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:53.859268 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:41:53.859279 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:53.859317 | orchestrator | 2026-04-17 07:41:53.859375 | orchestrator | TASK [prometheus : Move _data from old to new volume] ************************** 2026-04-17 07:41:53.859393 | orchestrator | Friday 17 April 2026 07:41:51 +0000 (0:00:02.343) 0:01:52.371 ********** 2026-04-17 07:41:53.859409 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:41:53.859425 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:41:53.859441 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:41:53.859458 | orchestrator | changed: [testbed-manager] 2026-04-17 07:41:53.859475 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:41:53.859492 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:41:53.859509 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:41:53.859526 | orchestrator | 2026-04-17 07:41:53.859542 | orchestrator | TASK [prometheus : Remove old Prometheus v2 volume] 
****************************
2026-04-17 07:41:53.859572 | orchestrator | Friday 17 April 2026 07:41:53 +0000 (0:00:02.211) 0:01:54.583 **********
2026-04-17 07:41:59.146637 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:41:59.146781 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:41:59.146805 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:41:59.146823 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:41:59.146840 | orchestrator | changed: [testbed-manager]
2026-04-17 07:41:59.146899 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:41:59.146911 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:41:59.146922 | orchestrator |
2026-04-17 07:41:59.146932 | orchestrator | TASK [service-check-containers : prometheus | Check containers] ****************
2026-04-17 07:41:59.146943 | orchestrator | Friday 17 April 2026 07:41:56 +0000 (0:00:02.445) 0:01:57.029 **********
2026-04-17 07:41:59.146958 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-17 07:41:59.146990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:41:59.147003 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:41:59.147033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:41:59.147043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:41:59.147071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:41:59.147082 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:41:59.147092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:41:59.147103 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:41:59.147118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:41:59.147135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:41:59.147145 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:41:59.147155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:41:59.147198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:42:01.733456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:01.733569 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:42:01.733587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:42:01.733617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:42:01.733628 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:42:01.733638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:01.733665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:01.733675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:01.733686 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:01.733701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:42:01.733719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:42:01.733730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:42:01.733740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:01.733775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:01.733794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:05.784345 | orchestrator |
2026-04-17 07:42:05.784453 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-04-17 07:42:05.784471 | orchestrator | Friday 17 April 2026 07:42:02 +0000 (0:00:06.574) 0:02:03.604 **********
2026-04-17 07:42:05.784484 | orchestrator | changed: [testbed-manager] => {
2026-04-17 07:42:05.784496 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 07:42:05.784508 | orchestrator | }
2026-04-17 07:42:05.784519 | orchestrator | changed: [testbed-node-0] => {
2026-04-17 07:42:05.784530 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 07:42:05.784541 | orchestrator | }
2026-04-17 07:42:05.784553 | orchestrator | changed: [testbed-node-1] => {
2026-04-17 07:42:05.784564 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 07:42:05.784574 | orchestrator | }
2026-04-17 07:42:05.784585 | orchestrator | changed: [testbed-node-2] => {
2026-04-17 07:42:05.784596 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 07:42:05.784631 | orchestrator | }
2026-04-17 07:42:05.784642 | orchestrator | changed: [testbed-node-3] => {
2026-04-17 07:42:05.784653 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 07:42:05.784663 | orchestrator | }
2026-04-17 07:42:05.784674 | orchestrator | changed: [testbed-node-4] => {
2026-04-17 07:42:05.784685 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 07:42:05.784695 | orchestrator | }
2026-04-17 07:42:05.784706 | orchestrator | changed: [testbed-node-5] => {
2026-04-17 07:42:05.784717 | orchestrator |  "msg": "Notifying handlers"
2026-04-17 07:42:05.784727 | orchestrator | }
2026-04-17 07:42:05.784738 | orchestrator |
2026-04-17 07:42:05.784750 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-17 07:42:05.784761 | orchestrator | Friday 17 April 2026 07:42:04 +0000 (0:00:02.078) 0:02:05.682 **********
2026-04-17 07:42:05.784791 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-17 07:42:05.784808 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:42:05.784821 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:42:05.784853 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:42:05.784873 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:05.784888 | orchestrator | skipping: [testbed-manager]
2026-04-17 07:42:05.784906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:42:05.784920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:05.784934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:05.784948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:42:05.784966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:05.784984 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:42:05.785014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:42:06.366741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:06.366846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:06.366880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:42:06.366893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:06.366906 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:42:06.366921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:42:06.366933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:42:06.366945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:42:06.366977 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:42:06.367010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:42:06.367023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:42:06.367040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:42:06.367051 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:42:06.367063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:42:06.367075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:06.367086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:42:06.367098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:42:06.367124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-17 07:44:28.842315 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:44:28.842437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-17 07:44:28.842534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-17 07:44:28.842552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-17 07:44:28.842565 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:44:28.842577 | orchestrator |
2026-04-17 07:44:28.842589 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 07:44:28.842601 | orchestrator | Friday 17 April 2026 07:42:07 +0000 (0:00:03.071) 0:02:08.753 **********
2026-04-17 07:44:28.842612 | orchestrator |
2026-04-17 07:44:28.842623 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 07:44:28.842634 | orchestrator | Friday 17 April 2026 07:42:08 +0000 (0:00:00.487) 0:02:09.241 **********
2026-04-17 07:44:28.842645 | orchestrator |
2026-04-17 07:44:28.842656 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 07:44:28.842666 | orchestrator | Friday 17 April 2026 07:42:08 +0000 (0:00:00.484) 0:02:09.726 **********
2026-04-17 07:44:28.842677 | orchestrator |
2026-04-17 07:44:28.842688 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 07:44:28.842698 | orchestrator | Friday 17 April 2026 07:42:09 +0000 (0:00:00.444) 0:02:10.171 **********
2026-04-17 07:44:28.842709 | orchestrator |
2026-04-17 07:44:28.842720 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 07:44:28.842731 | orchestrator | Friday 17 April 2026 07:42:09 +0000 (0:00:00.445) 0:02:10.617 **********
2026-04-17 07:44:28.842742 | orchestrator |
2026-04-17 07:44:28.842752 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 07:44:28.842763 | orchestrator | Friday 17 April 2026 07:42:10 +0000 (0:00:00.453) 0:02:11.070 **********
2026-04-17 07:44:28.842797 | orchestrator |
2026-04-17 07:44:28.842809 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 07:44:28.842820 | orchestrator | Friday 17 April 2026 07:42:11 +0000 (0:00:00.819) 0:02:11.890 **********
2026-04-17 07:44:28.842830 | orchestrator |
2026-04-17 07:44:28.842841 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-17 07:44:28.842854 | orchestrator | Friday 17 April 2026 07:42:11 +0000 (0:00:00.829) 0:02:12.720 **********
2026-04-17 07:44:28.842866 | orchestrator | changed: [testbed-manager]
2026-04-17 07:44:28.842878 | orchestrator |
2026-04-17 07:44:28.842891 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-17 07:44:28.842903 | orchestrator | Friday 17 April 2026 07:42:32 +0000 (0:00:20.194) 0:02:32.914 **********
2026-04-17 07:44:28.842915 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:44:28.842927 | orchestrator | changed: [testbed-node-3]
2026-04-17 07:44:28.842940 | orchestrator | changed: [testbed-manager]
2026-04-17
07:44:28.842953 | orchestrator | changed: [testbed-node-4] 2026-04-17 07:44:28.842966 | orchestrator | changed: [testbed-node-5] 2026-04-17 07:44:28.842978 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:44:28.842990 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:44:28.843002 | orchestrator | 2026-04-17 07:44:28.843015 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-17 07:44:28.843027 | orchestrator | Friday 17 April 2026 07:42:52 +0000 (0:00:19.906) 0:02:52.821 ********** 2026-04-17 07:44:28.843039 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:44:28.843052 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:44:28.843064 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:44:28.843076 | orchestrator | 2026-04-17 07:44:28.843088 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-17 07:44:28.843101 | orchestrator | Friday 17 April 2026 07:43:05 +0000 (0:00:13.170) 0:03:05.991 ********** 2026-04-17 07:44:28.843114 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:44:28.843126 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:44:28.843138 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:44:28.843150 | orchestrator | 2026-04-17 07:44:28.843180 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-17 07:44:28.843194 | orchestrator | Friday 17 April 2026 07:43:18 +0000 (0:00:12.905) 0:03:18.897 ********** 2026-04-17 07:44:28.843206 | orchestrator | changed: [testbed-node-4] 2026-04-17 07:44:28.843218 | orchestrator | changed: [testbed-manager] 2026-04-17 07:44:28.843230 | orchestrator | changed: [testbed-node-3] 2026-04-17 07:44:28.843242 | orchestrator | changed: [testbed-node-5] 2026-04-17 07:44:28.843252 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:44:28.843263 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:44:28.843274 | 
orchestrator | changed: [testbed-node-2] 2026-04-17 07:44:28.843284 | orchestrator | 2026-04-17 07:44:28.843295 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-17 07:44:28.843306 | orchestrator | Friday 17 April 2026 07:43:35 +0000 (0:00:17.124) 0:03:36.022 ********** 2026-04-17 07:44:28.843316 | orchestrator | changed: [testbed-manager] 2026-04-17 07:44:28.843327 | orchestrator | 2026-04-17 07:44:28.843338 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-17 07:44:28.843349 | orchestrator | Friday 17 April 2026 07:43:50 +0000 (0:00:15.018) 0:03:51.040 ********** 2026-04-17 07:44:28.843360 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:44:28.843371 | orchestrator | changed: [testbed-node-2] 2026-04-17 07:44:28.843381 | orchestrator | changed: [testbed-node-1] 2026-04-17 07:44:28.843392 | orchestrator | 2026-04-17 07:44:28.843403 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-17 07:44:28.843414 | orchestrator | Friday 17 April 2026 07:44:03 +0000 (0:00:13.028) 0:04:04.069 ********** 2026-04-17 07:44:28.843430 | orchestrator | changed: [testbed-manager] 2026-04-17 07:44:28.843441 | orchestrator | 2026-04-17 07:44:28.843460 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-17 07:44:28.843491 | orchestrator | Friday 17 April 2026 07:44:15 +0000 (0:00:12.411) 0:04:16.481 ********** 2026-04-17 07:44:28.843502 | orchestrator | changed: [testbed-node-4] 2026-04-17 07:44:28.843513 | orchestrator | changed: [testbed-node-3] 2026-04-17 07:44:28.843524 | orchestrator | changed: [testbed-node-5] 2026-04-17 07:44:28.843534 | orchestrator | 2026-04-17 07:44:28.843545 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 07:44:28.843557 | orchestrator | testbed-manager : ok=28  
changed=14  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-17 07:44:28.843569 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-17 07:44:28.843580 | orchestrator | testbed-node-1 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-17 07:44:28.843590 | orchestrator | testbed-node-2 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-17 07:44:28.843601 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 07:44:28.843612 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 07:44:28.843623 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 07:44:28.843633 | orchestrator | 2026-04-17 07:44:28.843644 | orchestrator | 2026-04-17 07:44:28.843655 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 07:44:28.843666 | orchestrator | Friday 17 April 2026 07:44:28 +0000 (0:00:13.090) 0:04:29.571 ********** 2026-04-17 07:44:28.843676 | orchestrator | =============================================================================== 2026-04-17 07:44:28.843687 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.19s 2026-04-17 07:44:28.843698 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 19.91s 2026-04-17 07:44:28.843709 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.40s 2026-04-17 07:44:28.843719 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.12s 2026-04-17 07:44:28.843730 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 15.02s 2026-04-17 07:44:28.843740 | orchestrator | prometheus 
: Restart prometheus-mysqld-exporter container -------------- 13.17s 2026-04-17 07:44:28.843751 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 13.09s 2026-04-17 07:44:28.843761 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.03s 2026-04-17 07:44:28.843772 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.91s 2026-04-17 07:44:28.843783 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 12.41s 2026-04-17 07:44:28.843793 | orchestrator | prometheus : Gracefully stop Prometheus --------------------------------- 8.18s 2026-04-17 07:44:28.843804 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.86s 2026-04-17 07:44:28.843814 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.98s 2026-04-17 07:44:28.843825 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 6.58s 2026-04-17 07:44:28.843835 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 5.82s 2026-04-17 07:44:28.843846 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.74s 2026-04-17 07:44:28.843864 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 4.35s 2026-04-17 07:44:29.286325 | orchestrator | prometheus : include_tasks ---------------------------------------------- 4.09s 2026-04-17 07:44:29.286442 | orchestrator | prometheus : Flush handlers --------------------------------------------- 3.97s 2026-04-17 07:44:29.286464 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.80s 2026-04-17 07:44:30.912043 | orchestrator | 2026-04-17 07:44:30 | INFO  | Prepare task for execution of grafana. 
2026-04-17 07:44:30.978666 | orchestrator | 2026-04-17 07:44:30 | INFO  | Task 299077b8-5408-4429-8456-6c963017d8f0 (grafana) was prepared for execution.
2026-04-17 07:44:30.978758 | orchestrator | 2026-04-17 07:44:30 | INFO  | It takes a moment until task 299077b8-5408-4429-8456-6c963017d8f0 (grafana) has been started and output is visible here.
2026-04-17 07:44:54.089440 | orchestrator |
2026-04-17 07:44:54.089603 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 07:44:54.089620 | orchestrator |
2026-04-17 07:44:54.089632 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 07:44:54.089643 | orchestrator | Friday 17 April 2026 07:44:35 +0000 (0:00:01.635) 0:00:01.635 **********
2026-04-17 07:44:54.089655 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:44:54.089673 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:44:54.089711 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:44:54.089731 | orchestrator |
2026-04-17 07:44:54.089751 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 07:44:54.089771 | orchestrator | Friday 17 April 2026 07:44:37 +0000 (0:00:01.690) 0:00:03.326 **********
2026-04-17 07:44:54.089790 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-04-17 07:44:54.089809 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-04-17 07:44:54.089828 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-04-17 07:44:54.089839 | orchestrator |
2026-04-17 07:44:54.089850 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-04-17 07:44:54.089861 | orchestrator |
2026-04-17 07:44:54.089872 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-17 07:44:54.089883 | orchestrator | Friday 17 April 2026 07:44:39 +0000 (0:00:01.562) 0:00:04.888 **********
2026-04-17 07:44:54.089894 | orchestrator | included: /ansible/roles/grafana/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 07:44:54.089906 | orchestrator |
2026-04-17 07:44:54.089917 | orchestrator | TASK [grafana : Checking if Grafana container needs upgrading] *****************
2026-04-17 07:44:54.089928 | orchestrator | Friday 17 April 2026 07:44:42 +0000 (0:00:03.272) 0:00:08.161 **********
2026-04-17 07:44:54.089938 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:44:54.089949 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:44:54.089959 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:44:54.089970 | orchestrator |
2026-04-17 07:44:54.089983 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-04-17 07:44:54.089996 | orchestrator | Friday 17 April 2026 07:44:45 +0000 (0:00:03.178) 0:00:11.339 **********
2026-04-17 07:44:54.090013 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:44:54.090100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:44:54.090142 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:44:54.090156 | orchestrator |
2026-04-17 07:44:54.090169 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-04-17 07:44:54.090181 | orchestrator | Friday 17 April 2026 07:44:47 +0000 (0:00:02.258) 0:00:13.050 **********
2026-04-17 07:44:54.090194 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 07:44:54.090208 | orchestrator |
2026-04-17 07:44:54.090220 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-17 07:44:54.090252 | orchestrator | Friday 17 April 2026 07:44:49 +0000 (0:00:02.258) 0:00:15.309 **********
2026-04-17 07:44:54.090265 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 07:44:54.090278 | orchestrator |
2026-04-17 07:44:54.090290 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-04-17 07:44:54.090308 | orchestrator | Friday 17 April 2026 07:44:51 +0000 (0:00:01.956) 0:00:17.265 **********
2026-04-17 07:44:54.090322 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:44:54.090336 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:44:54.090348 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:44:54.090367 | orchestrator |
2026-04-17 07:44:54.090378 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-04-17 07:44:54.090389 | orchestrator | Friday 17 April 2026 07:44:53 +0000 (0:00:02.262) 0:00:19.528 **********
2026-04-17 07:44:54.090400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:44:54.090411 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:44:54.090432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:45:00.931229 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:45:00.931333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:45:00.931349 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:45:00.931359 | orchestrator |
2026-04-17 07:45:00.931369 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-04-17 07:45:00.931378 | orchestrator | Friday 17 April 2026 07:44:55 +0000 (0:00:01.549) 0:00:21.077 ********** 2026-04-17 07:45:00.931388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:45:00.931416 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:45:00.931426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:45:00.931435 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:45:00.931443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:45:00.931452 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:45:00.931539 | orchestrator | 2026-04-17 07:45:00.931549 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-17 07:45:00.931559 | orchestrator | Friday 17 April 2026 07:44:57 +0000 (0:00:01.863) 0:00:22.941 ********** 2026-04-17 07:45:00.931591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:45:00.931602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:45:00.931619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:45:00.931629 | orchestrator | 2026-04-17 07:45:00.931637 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-17 07:45:00.931646 | orchestrator | Friday 17 April 2026 07:44:59 +0000 (0:00:02.290) 0:00:25.231 ********** 2026-04-17 07:45:00.931655 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:45:00.931665 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:45:00.931686 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:45:27.283891 | orchestrator |
2026-04-17 07:45:27.283985 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-17 07:45:27.283998 | orchestrator | Friday 17 April 2026 07:45:01 +0000 (0:00:02.488) 0:00:27.719 **********
2026-04-17 07:45:27.284007 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:45:27.284016 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:45:27.284024 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:45:27.284032 | orchestrator |
2026-04-17 07:45:27.284060 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-17 07:45:27.284070 | orchestrator | Friday 17 April 2026 07:45:03 +0000 (0:00:01.373) 0:00:29.093 **********
2026-04-17 07:45:27.284078 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-17 07:45:27.284087 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-17 07:45:27.284095 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-17 07:45:27.284103 | orchestrator |
2026-04-17 07:45:27.284111 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-17 07:45:27.284118 | orchestrator | Friday 17 April 2026 07:45:05 +0000 (0:00:02.338) 0:00:31.431 **********
2026-04-17 07:45:27.284127 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-17 07:45:27.284135 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-17 07:45:27.284143 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-17 07:45:27.284150 | orchestrator |
2026-04-17 07:45:27.284158 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ******
2026-04-17 07:45:27.284166 | orchestrator | Friday 17 April 2026 07:45:07 +0000 (0:00:02.286) 0:00:33.717 **********
2026-04-17 07:45:27.284174 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 07:45:27.284182 | orchestrator |
2026-04-17 07:45:27.284189 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] ***************************
2026-04-17 07:45:27.284197 | orchestrator | Friday 17 April 2026 07:45:09 +0000 (0:00:01.763) 0:00:35.481 **********
2026-04-17 07:45:27.284205 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:45:27.284213 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:45:27.284221 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:45:27.284228 | orchestrator |
2026-04-17 07:45:27.284236 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-17 07:45:27.284244 | orchestrator | Friday 17 April 2026 07:45:11 +0000 (0:00:01.975) 0:00:37.456 **********
2026-04-17 07:45:27.284251 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:45:27.284259 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:45:27.284267 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:45:27.284275 | orchestrator |
2026-04-17 07:45:27.284282 | orchestrator | TASK [service-check-containers : grafana | Check containers] *******************
2026-04-17 07:45:27.284290 | orchestrator | Friday 17 April 2026 07:45:14 +0000 (0:00:02.747) 0:00:40.203 **********
2026-04-17 07:45:27.284300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value':
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:45:27.284311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:45:27.284352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:45:27.284363 | orchestrator | 2026-04-17 07:45:27.284371 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-04-17 07:45:27.284379 | orchestrator | Friday 17 April 2026 07:45:16 +0000 (0:00:02.229) 0:00:42.433 ********** 2026-04-17 07:45:27.284387 | orchestrator | changed: [testbed-node-0] => { 2026-04-17 07:45:27.284395 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:45:27.284403 | orchestrator | } 2026-04-17 07:45:27.284411 | orchestrator | changed: [testbed-node-1] => { 2026-04-17 07:45:27.284419 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:45:27.284427 | orchestrator | } 2026-04-17 07:45:27.284435 | orchestrator | changed: [testbed-node-2] => { 2026-04-17 07:45:27.284442 | orchestrator |  "msg": "Notifying handlers" 2026-04-17 07:45:27.284450 | orchestrator | } 2026-04-17 07:45:27.284481 | orchestrator | 2026-04-17 07:45:27.284489 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-17 07:45:27.284497 | orchestrator | Friday 17 April 2026 07:45:18 +0000 (0:00:01.469) 0:00:43.902 ********** 2026-04-17 07:45:27.284505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:45:27.284514 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:45:27.284522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:45:27.284530 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:45:27.284538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:45:27.284552 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:45:27.284560 | orchestrator |
2026-04-17 07:45:27.284568 | orchestrator | TASK [grafana : Stopping all Grafana instances but the first node] *************
2026-04-17 07:45:27.284575 | orchestrator | Friday 17 April 2026 07:45:19 +0000 (0:00:01.382) 0:00:45.285 **********
2026-04-17 07:45:27.284583 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:45:27.284591 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:45:27.284599 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:45:27.284607 | orchestrator |
2026-04-17 07:45:27.284614 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-17 07:45:27.284626 | orchestrator | Friday 17 April 2026 07:45:26 +0000 (0:00:06.993) 0:00:52.279 **********
2026-04-17 07:45:27.284634 | orchestrator |
2026-04-17 07:45:27.284642 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-17 07:45:27.284650 | orchestrator | Friday 17 April 2026 07:45:27 +0000 (0:00:00.459) 0:00:52.738 **********
2026-04-17 07:45:27.284658 | orchestrator |
2026-04-17 07:45:27.284671 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-17 07:47:09.295919 | orchestrator | Friday 17 April 2026 07:45:27 +0000 (0:00:00.614) 0:00:53.352 **********
2026-04-17 07:47:09.296005 | orchestrator |
2026-04-17 07:47:09.296013 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-04-17 07:47:09.296018 | orchestrator | Friday 17 April 2026 07:45:28 +0000 (0:00:00.820) 0:00:54.173 **********
2026-04-17 07:47:09.296023 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:47:09.296029 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:47:09.296034 | orchestrator | changed: [testbed-node-0]
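The rolling restart above (stop Grafana everywhere but the first node, restart the first node's container, poll until it answers, then restart the rest) hinges on a bounded retry loop; the log shows the wait step starting with 12 retries. A minimal shell sketch of that retry pattern — the `wait_for` helper and its probe are illustrative stand-ins, not the kolla-ansible implementation:

```shell
#!/bin/sh
# Retry a command up to $1 times, mirroring the "Waiting for grafana to
# start on first node" handler (12 retries in the log above).
wait_for() {
  retries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      return 1
    fi
    sleep 0  # the real handler sleeps between retries; 0 keeps the sketch fast
  done
}

# Example probe that fails twice before succeeding, matching the two
# "FAILED - RETRYING" lines followed by "ok" in the log.
n=0
probe() { n=$((n + 1)); [ "$n" -ge 3 ]; }
wait_for 12 probe && echo "grafana is up"
```

In the real deployment the probe would be something like `curl -sf http://<first-node>:3000/login` (endpoint assumed, not taken from the role).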
2026-04-17 07:47:09.296038 | orchestrator |
2026-04-17 07:47:09.296043 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-04-17 07:47:09.296048 | orchestrator | Friday 17 April 2026 07:46:06 +0000 (0:00:38.431) 0:01:32.604 **********
2026-04-17 07:47:09.296053 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:47:09.296057 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:47:09.296062 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-04-17 07:47:09.296068 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-04-17 07:47:09.296072 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:47:09.296078 | orchestrator |
2026-04-17 07:47:09.296082 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-04-17 07:47:09.296087 | orchestrator | Friday 17 April 2026 07:46:34 +0000 (0:00:27.461) 0:02:00.066 **********
2026-04-17 07:47:09.296092 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:47:09.296096 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:47:09.296101 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:47:09.296105 | orchestrator |
2026-04-17 07:47:09.296110 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:47:09.296116 | orchestrator | testbed-node-0 : ok=19  changed=6  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-17 07:47:09.296122 | orchestrator | testbed-node-1 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-17 07:47:09.296126 | orchestrator | testbed-node-2 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-17 07:47:09.296148 | orchestrator |
2026-04-17 07:47:09.296153 | orchestrator |
2026-04-17 07:47:09.296158 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:47:09.296162 | orchestrator | Friday 17 April 2026 07:47:08 +0000 (0:00:34.614) 0:02:34.680 **********
2026-04-17 07:47:09.296167 | orchestrator | ===============================================================================
2026-04-17 07:47:09.296171 | orchestrator | grafana : Restart first grafana container ------------------------------ 38.43s
2026-04-17 07:47:09.296176 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.61s
2026-04-17 07:47:09.296180 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.46s
2026-04-17 07:47:09.296185 | orchestrator | grafana : Stopping all Grafana instances but the first node ------------- 6.99s
2026-04-17 07:47:09.296189 | orchestrator | grafana : include_tasks ------------------------------------------------- 3.27s
2026-04-17 07:47:09.296194 | orchestrator | grafana : Checking if Grafana container needs upgrading ----------------- 3.18s
2026-04-17 07:47:09.296198 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 2.75s
2026-04-17 07:47:09.296203 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 2.49s
2026-04-17 07:47:09.296207 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 2.34s
2026-04-17 07:47:09.296211 | orchestrator | grafana : Copying over config.json files -------------------------------- 2.29s
2026-04-17 07:47:09.296216 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 2.29s
2026-04-17 07:47:09.296220 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 2.26s
2026-04-17 07:47:09.296225 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 2.26s
2026-04-17 07:47:09.296229 | orchestrator | service-check-containers : grafana | Check containers ------------------- 2.23s
2026-04-17 07:47:09.296234 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 1.98s
2026-04-17 07:47:09.296238 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.96s
2026-04-17 07:47:09.296243 | orchestrator | grafana : Flush handlers ------------------------------------------------ 1.89s
2026-04-17 07:47:09.296247 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.86s
2026-04-17 07:47:09.296252 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 1.76s
2026-04-17 07:47:09.296256 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.71s
2026-04-17 07:47:09.499615 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/510-clusterapi.sh
2026-04-17 07:47:09.505557 | orchestrator | + set -e
2026-04-17 07:47:09.505622 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-17 07:47:09.505637 | orchestrator | ++ export INTERACTIVE=false
2026-04-17 07:47:09.505649 | orchestrator | ++ INTERACTIVE=false
2026-04-17 07:47:09.505659 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-17 07:47:09.505670 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-17 07:47:09.505699 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-17 07:47:09.506941 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-17 07:47:09.513505 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-17 07:47:09.513569 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-17 07:47:09.514266 | orchestrator | ++ semver 10.0.0 8.0.0
2026-04-17 07:47:09.583058 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-17 07:47:09.583149 | orchestrator | + osism apply clusterapi
2026-04-17 07:47:10.899194 | orchestrator | 2026-04-17 07:47:10 | INFO  | Prepare task for execution of clusterapi.
2026-04-17 07:47:10.968091 | orchestrator | 2026-04-17 07:47:10 | INFO  | Task b672c60c-9963-42a6-b589-9b6a5377b59d (clusterapi) was prepared for execution.
2026-04-17 07:47:10.968182 | orchestrator | 2026-04-17 07:47:10 | INFO  | It takes a moment until task b672c60c-9963-42a6-b589-9b6a5377b59d (clusterapi) has been started and output is visible here.
2026-04-17 07:48:15.623351 | orchestrator |
2026-04-17 07:48:15.623551 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-04-17 07:48:15.623582 | orchestrator |
2026-04-17 07:48:15.623602 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-04-17 07:48:15.623621 | orchestrator | Friday 17 April 2026 07:47:16 +0000 (0:00:01.500) 0:00:01.500 **********
2026-04-17 07:48:15.623640 | orchestrator | included: cert_manager for testbed-manager
2026-04-17 07:48:15.623660 | orchestrator |
2026-04-17 07:48:15.623679 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-04-17 07:48:15.623700 | orchestrator | Friday 17 April 2026 07:47:17 +0000 (0:00:01.814) 0:00:03.314 **********
2026-04-17 07:48:15.623718 | orchestrator | ok: [testbed-manager]
2026-04-17 07:48:15.623737 | orchestrator |
2026-04-17 07:48:15.623756 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-04-17 07:48:15.623773 | orchestrator | Friday 17 April 2026 07:47:22 +0000 (0:00:04.616) 0:00:07.931 **********
2026-04-17 07:48:15.623791 | orchestrator | ok: [testbed-manager]
2026-04-17 07:48:15.623809 | orchestrator |
2026-04-17 07:48:15.623829 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-04-17 07:48:15.623849 | orchestrator |
2026-04-17 07:48:15.623869 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-04-17 07:48:15.623888 | orchestrator | Friday 17 April 2026 07:47:27 +0000 (0:00:04.830) 0:00:12.761 **********
2026-04-17 07:48:15.623909 | orchestrator | ok: [testbed-manager]
2026-04-17 07:48:15.623928 | orchestrator |
2026-04-17 07:48:15.623948 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-04-17 07:48:15.623962 | orchestrator | Friday 17 April 2026 07:47:29 +0000 (0:00:02.151) 0:00:14.913 **********
2026-04-17 07:48:15.623975 | orchestrator | ok: [testbed-manager]
2026-04-17 07:48:15.623988 | orchestrator |
2026-04-17 07:48:15.624000 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-04-17 07:48:15.624013 | orchestrator | Friday 17 April 2026 07:47:30 +0000 (0:00:01.160) 0:00:16.073 **********
2026-04-17 07:48:15.624026 | orchestrator | skipping: [testbed-manager]
2026-04-17 07:48:15.624039 | orchestrator |
2026-04-17 07:48:15.624052 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-04-17 07:48:15.624064 | orchestrator | Friday 17 April 2026 07:47:31 +0000 (0:00:01.139) 0:00:17.212 **********
2026-04-17 07:48:15.624076 | orchestrator | ok: [testbed-manager]
2026-04-17 07:48:15.624089 | orchestrator |
2026-04-17 07:48:15.624101 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-04-17 07:48:15.624113 | orchestrator | Friday 17 April 2026 07:48:11 +0000 (0:00:39.845) 0:00:57.058 **********
2026-04-17 07:48:15.624126 | orchestrator | changed: [testbed-manager]
2026-04-17 07:48:15.624138 | orchestrator |
2026-04-17 07:48:15.624151 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:48:15.624165 | orchestrator | testbed-manager : ok=7  changed=1  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-17 07:48:15.624177 | orchestrator |
2026-04-17 07:48:15.624188 | orchestrator |
2026-04-17 07:48:15.624198 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:48:15.624209 | orchestrator | Friday 17 April 2026 07:48:15 +0000 (0:00:03.585) 0:01:00.643 **********
2026-04-17 07:48:15.624220 | orchestrator | ===============================================================================
2026-04-17 07:48:15.624231 | orchestrator | Upgrade the CAPI management cluster ------------------------------------ 39.85s
2026-04-17 07:48:15.624241 | orchestrator | cert_manager : Deploy cert-manager -------------------------------------- 4.83s
2026-04-17 07:48:15.624252 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 4.62s
2026-04-17 07:48:15.624263 | orchestrator | Install openstack-resource-controller ----------------------------------- 3.59s
2026-04-17 07:48:15.624273 | orchestrator | Get capi-system namespace phase ----------------------------------------- 2.15s
2026-04-17 07:48:15.624284 | orchestrator | Include cert_manager role ----------------------------------------------- 1.81s
2026-04-17 07:48:15.624337 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 1.16s
2026-04-17 07:48:15.624365 | orchestrator | Initialize the CAPI management cluster ---------------------------------- 1.14s
2026-04-17 07:48:15.838377 | orchestrator | + osism apply -a upgrade magnum
2026-04-17 07:48:17.183789 | orchestrator | 2026-04-17 07:48:17 | INFO  | Prepare task for execution of magnum.
2026-04-17 07:48:17.257824 | orchestrator | 2026-04-17 07:48:17 | INFO  | Task c8bb2f93-ffc3-449e-9cec-07e8a87ca7cf (magnum) was prepared for execution.
2026-04-17 07:48:17.257921 | orchestrator | 2026-04-17 07:48:17 | INFO  | It takes a moment until task c8bb2f93-ffc3-449e-9cec-07e8a87ca7cf (magnum) has been started and output is visible here.
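The traced 510-clusterapi.sh above gates the Cluster API steps on the manager version: it extracts `manager_version` from configuration.yml with awk and only proceeds when the result compares at least equal to 8.0.0 (the `semver 10.0.0 8.0.0` helper returning 1 for "greater"). A self-contained sketch of that gate, using GNU `sort -V` as a stand-in for the script's `semver` helper (an assumption, not the helper's actual implementation):

```shell
#!/bin/sh
# Sample configuration; the real file lives at
# /opt/configuration/environments/manager/configuration.yml.
cat > configuration.yml <<'EOF'
manager_version: 10.0.0
EOF

# Same awk extraction as shown in the xtrace output above.
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' configuration.yml)

# version_ge A B is true when A >= B; GNU sort -V does the version-aware
# compare, approximating the semver helper sourced from manager-version.sh.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

if version_ge "$MANAGER_VERSION" 8.0.0; then
  echo "manager >= 8.0.0, proceed with: osism apply clusterapi"
fi
```

The same pattern explains why the job runs the clusterapi script at all on this testbed: MANAGER_VERSION resolves to 10.0.0, which clears the 8.0.0 floor.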
2026-04-17 07:48:38.656260 | orchestrator |
2026-04-17 07:48:38.656367 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 07:48:38.656384 | orchestrator |
2026-04-17 07:48:38.656413 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 07:48:38.656424 | orchestrator | Friday 17 April 2026 07:48:22 +0000 (0:00:01.865) 0:00:01.866 **********
2026-04-17 07:48:38.656472 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:48:38.656485 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:48:38.656495 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:48:38.656506 | orchestrator |
2026-04-17 07:48:38.656518 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 07:48:38.656529 | orchestrator | Friday 17 April 2026 07:48:24 +0000 (0:00:01.801) 0:00:03.667 **********
2026-04-17 07:48:38.656540 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-17 07:48:38.656551 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-17 07:48:38.656561 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-17 07:48:38.656572 | orchestrator |
2026-04-17 07:48:38.656583 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-17 07:48:38.656594 | orchestrator |
2026-04-17 07:48:38.656605 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-17 07:48:38.656615 | orchestrator | Friday 17 April 2026 07:48:26 +0000 (0:00:01.890) 0:00:05.557 **********
2026-04-17 07:48:38.656626 | orchestrator | included: /ansible/roles/magnum/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 07:48:38.656639 | orchestrator |
2026-04-17 07:48:38.656649 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-17
07:48:38.656660 | orchestrator | Friday 17 April 2026 07:48:29 +0000 (0:00:03.324) 0:00:08.882 ********** 2026-04-17 07:48:38.656676 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:38.656693 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option 
httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:38.656751 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:38.656766 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:48:38.656779 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:48:38.656791 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:48:38.656811 | orchestrator | 2026-04-17 07:48:38.656824 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-17 07:48:38.656836 | orchestrator | Friday 17 April 2026 07:48:32 +0000 (0:00:03.000) 0:00:11.882 ********** 2026-04-17 07:48:38.656849 | 
orchestrator | skipping: [testbed-node-0]
2026-04-17 07:48:38.656862 | orchestrator |
2026-04-17 07:48:38.656875 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-17 07:48:38.656887 | orchestrator | Friday 17 April 2026 07:48:33 +0000 (0:00:01.143) 0:00:13.026 **********
2026-04-17 07:48:38.656899 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:48:38.656911 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:48:38.656923 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:48:38.656935 | orchestrator |
2026-04-17 07:48:38.656947 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-04-17 07:48:38.656959 | orchestrator | Friday 17 April 2026 07:48:34 +0000 (0:00:01.373) 0:00:14.400 **********
2026-04-17 07:48:38.656971 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 07:48:38.656983 | orchestrator |
2026-04-17 07:48:38.656995 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-04-17 07:48:38.657008 | orchestrator | Friday 17 April 2026 07:48:37 +0000 (0:00:02.323) 0:00:16.724 **********
2026-04-17 07:48:38.657033 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511',
'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:46.462118 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:46.462216 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:46.462256 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:48:46.462269 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:48:46.462314 | orchestrator | 
ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:48:46.462328 | orchestrator | 2026-04-17 07:48:46.462340 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-17 07:48:46.462353 | orchestrator | Friday 17 April 2026 07:48:40 +0000 (0:00:03.709) 0:00:20.434 ********** 2026-04-17 07:48:46.462364 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:48:46.462376 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:48:46.462387 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:48:46.462398 | orchestrator | 2026-04-17 07:48:46.462409 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-17 07:48:46.462420 | orchestrator | Friday 17 April 2026 07:48:42 +0000 (0:00:01.598) 0:00:22.033 ********** 2026-04-17 07:48:46.462470 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:48:46.462482 | orchestrator | 2026-04-17 07:48:46.462493 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-17 07:48:46.462504 | orchestrator | Friday 17 April 2026 07:48:44 +0000 (0:00:01.867) 0:00:23.901 ********** 2026-04-17 07:48:46.462516 | orchestrator 
| ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:46.462537 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:46.462555 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:46.462577 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:48:50.124302 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:48:50.124420 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:48:50.124489 | orchestrator | 2026-04-17 07:48:50.124501 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-17 07:48:50.124511 | orchestrator | Friday 17 April 2026 07:48:47 +0000 (0:00:03.374) 0:00:27.275 ********** 2026-04-17 07:48:50.124524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:48:50.124548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 07:48:50.124558 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:48:50.124586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:48:50.124606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:48:50.124617 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 07:48:50.124626 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:48:50.124635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 07:48:50.124644 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:48:50.124653 | orchestrator | 2026-04-17 07:48:50.124662 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-17 07:48:50.124675 | orchestrator | Friday 17 April 2026 07:48:49 +0000 (0:00:01.860) 0:00:29.135 ********** 2026-04-17 07:48:50.124691 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:48:54.412488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 07:48:54.412590 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:48:54.412608 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:48:54.412621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 07:48:54.412632 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:48:54.412658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-17 07:48:54.412708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 07:48:54.412720 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:48:54.412731 | orchestrator | 2026-04-17 07:48:54.412741 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-17 
07:48:54.412752 | orchestrator | Friday 17 April 2026 07:48:52 +0000 (0:00:02.403) 0:00:31.539 ********** 2026-04-17 07:48:54.412762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:54.412773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': 
['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:54.412791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:48:54.412817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:49:02.625388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:49:02.625582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 07:49:02.625601 | orchestrator | 2026-04-17 07:49:02.625614 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-17 07:49:02.625627 | orchestrator | Friday 17 April 2026 07:48:55 +0000 (0:00:03.510) 0:00:35.050 ********** 2026-04-17 
07:49:02.625657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-17 07:49:02.625692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:49:02.625725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:49:02.625738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:02.625750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:02.625767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:02.625786 | orchestrator |
2026-04-17 07:49:02.625797 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-04-17 07:49:02.625808 | orchestrator | Friday 17 April 2026 07:49:02 +0000 (0:00:06.680) 0:00:41.731 **********
2026-04-17 07:49:02.625827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:49:06.809419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:06.809569 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:49:06.809589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:49:06.809620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:06.809655 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:49:06.809668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:49:06.809699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:06.809711 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:49:06.809723 | orchestrator |
2026-04-17 07:49:06.809734 | orchestrator | TASK [service-check-containers : magnum | Check containers] ********************
2026-04-17 07:49:06.809746 | orchestrator | Friday 17 April 2026 07:49:04 +0000 (0:00:02.214) 0:00:43.946 **********
2026-04-17 07:49:06.809758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:49:06.809776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:49:06.809797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:49:06.809818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:34.231655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:34.231768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:34.231804 | orchestrator |
2026-04-17 07:49:34.231817 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] ***
2026-04-17 07:49:34.231828 | orchestrator | Friday 17 April 2026 07:49:08 +0000 (0:00:03.678) 0:00:47.625 **********
2026-04-17 07:49:34.231839 | orchestrator | changed: [testbed-node-0] => {
2026-04-17 07:49:34.231850 | orchestrator |     "msg": "Notifying handlers"
2026-04-17 07:49:34.231860 | orchestrator | }
2026-04-17 07:49:34.231870 | orchestrator | changed: [testbed-node-1] => {
2026-04-17 07:49:34.231879 | orchestrator |     "msg": "Notifying handlers"
2026-04-17 07:49:34.231889 | orchestrator | }
2026-04-17 07:49:34.231898 | orchestrator | changed: [testbed-node-2] => {
2026-04-17 07:49:34.231907 | orchestrator |     "msg": "Notifying handlers"
2026-04-17 07:49:34.231917 | orchestrator | }
2026-04-17 07:49:34.231927 | orchestrator |
2026-04-17 07:49:34.231938 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-17 07:49:34.231947 | orchestrator | Friday 17 April 2026 07:49:09 +0000 (0:00:01.411) 0:00:49.037 **********
2026-04-17 07:49:34.231974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:49:34.231986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:34.231997 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:49:34.232025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:49:34.232037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:34.232055 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:49:34.232071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-17 07:49:34.232082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 07:49:34.232092 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:49:34.232102 | orchestrator |
2026-04-17 07:49:34.232111 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-04-17 07:49:34.232121 | orchestrator | Friday 17 April 2026 07:49:11 +0000 (0:00:02.386) 0:00:51.423 **********
2026-04-17 07:49:34.232131 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:49:34.232140 | orchestrator |
2026-04-17 07:49:34.232149 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-17 07:49:34.232159 | orchestrator | Friday 17 April 2026 07:49:33 +0000 (0:00:21.816) 0:01:13.240 **********
2026-04-17 07:49:34.232169 | orchestrator |
2026-04-17 07:49:34.232178 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-17 07:49:34.232195 | orchestrator | Friday 17 April 2026 07:49:34 +0000 (0:00:00.439) 0:01:13.680 **********
2026-04-17 07:50:27.004086 | orchestrator |
2026-04-17 07:50:27.004207 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-17 07:50:27.004224 | orchestrator | Friday 17 April 2026 07:49:34 +0000 (0:00:00.439) 0:01:14.120 **********
2026-04-17 07:50:27.004236 | orchestrator |
2026-04-17 07:50:27.004248 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-04-17 07:50:27.004259 | orchestrator | Friday 17 April 2026 07:49:35 +0000 (0:00:00.808) 0:01:14.928 **********
2026-04-17 07:50:27.004271 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:50:27.004310 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:50:27.004322 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:50:27.004334 | orchestrator |
2026-04-17 07:50:27.004345 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-04-17 07:50:27.004356 | orchestrator | Friday 17 April 2026 07:49:58 +0000 (0:00:22.542) 0:01:37.470 **********
2026-04-17 07:50:27.004367 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:50:27.004378 | orchestrator | changed: [testbed-node-1]
2026-04-17 07:50:27.004389 | orchestrator | changed: [testbed-node-2]
2026-04-17 07:50:27.004400 | orchestrator |
2026-04-17 07:50:27.004411 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:50:27.004423 | orchestrator | testbed-node-0 : ok=16  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-17 07:50:27.004435 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 07:50:27.004446 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 07:50:27.004457 | orchestrator |
2026-04-17 07:50:27.004468 | orchestrator |
2026-04-17 07:50:27.004524 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:50:27.004537 | orchestrator | Friday 17 April 2026 07:50:26 +0000 (0:00:28.655) 0:02:06.126 **********
2026-04-17 07:50:27.004548 | orchestrator | ===============================================================================
2026-04-17 07:50:27.004559 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 28.66s
2026-04-17 07:50:27.004570 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 22.54s
2026-04-17 07:50:27.004580 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 21.82s
2026-04-17 07:50:27.004591 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.68s
2026-04-17 07:50:27.004601 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.71s
2026-04-17 07:50:27.004612 | orchestrator | service-check-containers : magnum | Check containers -------------------- 3.68s
2026-04-17 07:50:27.004624 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.51s
2026-04-17 07:50:27.004637 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.38s
2026-04-17 07:50:27.004649 | orchestrator | magnum : include_tasks -------------------------------------------------- 3.32s
2026-04-17 07:50:27.004678 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 3.00s
2026-04-17 07:50:27.004691 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.40s
2026-04-17 07:50:27.004703 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.39s
2026-04-17 07:50:27.004715 | orchestrator | magnum : Check if kubeconfig file is supplied --------------------------- 2.32s
2026-04-17 07:50:27.004728 | orchestrator | magnum : Copying over existing policy file ------------------------------ 2.22s
2026-04-17 07:50:27.004740 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.89s
2026-04-17 07:50:27.004753 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.87s
2026-04-17 07:50:27.004765 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 1.86s
2026-04-17 07:50:27.004779 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.80s
2026-04-17 07:50:27.004792 | orchestrator | magnum : Flush handlers ------------------------------------------------- 1.69s
2026-04-17 07:50:27.004805 | orchestrator | magnum : Set magnum kubeconfig file's path ------------------------------ 1.60s
2026-04-17 07:50:27.974730 | orchestrator | ok: Runtime: 2:41:15.165364
2026-04-17 07:50:28.470260 |
2026-04-17 07:50:28.470399 | TASK [Bootstrap services]
2026-04-17 07:50:29.028965 | orchestrator | skipping: Conditional result was False
2026-04-17 07:50:29.050605 |
2026-04-17 07:50:29.050753 | TASK [Run checks after the upgrade]
2026-04-17 07:50:29.735678 | orchestrator | + set -e
2026-04-17 07:50:29.735861 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-17 07:50:29.735888 | orchestrator | ++ export INTERACTIVE=false
2026-04-17 07:50:29.735909 | orchestrator | ++ INTERACTIVE=false
2026-04-17 07:50:29.735923 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-17 07:50:29.735936 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-17 07:50:29.735950 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-17 07:50:29.736699 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-17 07:50:29.742928 | orchestrator |
2026-04-17 07:50:29.742976 | orchestrator | # CHECK
2026-04-17 07:50:29.742989 | orchestrator |
2026-04-17 07:50:29.743000 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-17 07:50:29.743016 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-17 07:50:29.743028 | orchestrator | + echo
2026-04-17 07:50:29.743039 | orchestrator | + echo '# CHECK'
2026-04-17 07:50:29.743050 | orchestrator | + echo
2026-04-17 07:50:29.743065 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-17 07:50:29.744154
| orchestrator | ++ semver 10.0.0 5.0.0
2026-04-17 07:50:29.808126 | orchestrator |
2026-04-17 07:50:29.808231 | orchestrator | ## Containers @ testbed-manager
2026-04-17 07:50:29.808255 | orchestrator |
2026-04-17 07:50:29.808290 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-17 07:50:29.808311 | orchestrator | + echo
2026-04-17 07:50:29.808331 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-17 07:50:29.808351 | orchestrator | + echo
2026-04-17 07:50:29.808372 | orchestrator | + osism container testbed-manager ps
2026-04-17 07:50:31.293644 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-17 07:50:31.293747 | orchestrator | 07731610989a registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_blackbox_exporter
2026-04-17 07:50:31.293756 | orchestrator | e852de24018e registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_alertmanager
2026-04-17 07:50:31.293766 | orchestrator | aea51cc2befe registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 7 minutes ago Up 6 minutes prometheus_cadvisor
2026-04-17 07:50:31.293771 | orchestrator | 8007aeb8a51b registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter
2026-04-17 07:50:31.293776 | orchestrator | ad86d60001fb registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_server
2026-04-17 07:50:31.293781 | orchestrator | 713e847586a6 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-17 07:50:31.293790 | orchestrator | 223b42eeeb96 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-17 07:50:31.293796 | orchestrator | a12c540ee7d5 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-17 07:50:31.293820 | orchestrator | 86f0a86633e0 registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-04-17 07:50:31.293826 | orchestrator | 65a02ce0f08f registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" 3 hours ago Up 3 hours (healthy) manager-inventory_reconciler-1
2026-04-17 07:50:31.293831 | orchestrator | 8a54b7cfb09a registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-ansible
2026-04-17 07:50:31.293836 | orchestrator | 5ed9e5e3af50 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" 3 hours ago Up 3 hours (healthy) osismclient
2026-04-17 07:50:31.293841 | orchestrator | 846ad62154d6 registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) ceph-ansible
2026-04-17 07:50:31.293863 | orchestrator | da44addfd898 registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) kolla-ansible
2026-04-17 07:50:31.293868 | orchestrator | ec4222714567 registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-kubernetes
2026-04-17 07:50:31.293873 | orchestrator | 94066cd1dd83 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-openstack-1
2026-04-17 07:50:31.293878 | orchestrator | 94a7f683c4e7 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-17 07:50:31.293883 | orchestrator | 30d6ea7e1f2d registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up About an hour (healthy) manager-listener-1
2026-04-17 07:50:31.293888 | orchestrator | 4f2010a41011 registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" 3 hours ago Up 3 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-17 07:50:31.293893 | orchestrator | 8423bebf293b registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-flower-1
2026-04-17 07:50:31.293898 | orchestrator | 228b385bda9d registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-beat-1
2026-04-17 07:50:31.293903 | orchestrator | cf90df4b9e0c registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 4 hours ago Up 4 hours cephclient
2026-04-17 07:50:31.293913 | orchestrator | ba8bc04417fc phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 5 hours ago Up 5 hours (healthy) 80/tcp phpmyadmin
2026-04-17 07:50:31.293918 | orchestrator | 15be83bd3eec registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 5 hours ago Up 5 hours (healthy) 8080/tcp homer
2026-04-17 07:50:31.293923 | orchestrator | f3ccb8ec2176 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 5 hours ago Up 5 hours 80/tcp cgit
2026-04-17 07:50:31.293927 | orchestrator | 80774d88d876 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 5 hours ago Up 5 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-17 07:50:31.293935 | orchestrator | 79af0cab1be0 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 5 hours ago Up 3 hours (healthy) 8000/tcp manager-ara-server-1
2026-04-17 07:50:31.293940 | orchestrator | e7f7d2f5506c registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 5 hours ago Up 3 hours (healthy) 3306/tcp manager-mariadb-1
2026-04-17 07:50:31.293945 | orchestrator | afb2a1e0ac95 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 5 hours ago Up 3 hours (healthy) 6379/tcp manager-redis-1
2026-04-17 07:50:31.293954 | orchestrator | 86728f39d63f registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 5 hours ago Up 5 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-17 07:50:31.443422 | orchestrator |
2026-04-17 07:50:31.443546 | orchestrator | ## Images @ testbed-manager
2026-04-17 07:50:31.443563 | orchestrator |
2026-04-17 07:50:31.443575 | orchestrator | + echo
2026-04-17 07:50:31.443587 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-17 07:50:31.443599 | orchestrator | + echo
2026-04-17 07:50:31.443610 | orchestrator | + osism container testbed-manager images
2026-04-17 07:50:32.937725 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-17 07:50:32.937834 | orchestrator | registry.osism.tech/osism/openstackclient 2025.1 bcb97a3bca9f 4 hours ago 212MB
2026-04-17 07:50:32.937851 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 9e238fdcbaa6 28 hours ago 238MB
2026-04-17 07:50:32.937872 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20260328.0 38f6ca42e9a0 2 weeks ago 635MB
2026-04-17 07:50:32.937884 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 2 weeks ago 590MB
2026-04-17 07:50:32.937896 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 2 weeks ago 683MB
2026-04-17 07:50:32.937908 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 2 weeks ago 277MB
2026-04-17 07:50:32.937919 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter 0.25.0.20260328 1bf017fd7bf3 2 weeks ago 319MB
2026-04-17 07:50:32.937961 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager 0.28.1.20260328 d1986023a383 2 weeks ago 415MB
2026-04-17 07:50:32.937973 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 2 weeks ago 368MB
2026-04-17 07:50:32.937985 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-server 3.2.1.20260328 4f5732d5eb69 2 weeks ago 860MB
2026-04-17 07:50:32.937996 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 2 weeks ago 317MB
2026-04-17 07:50:32.938007 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20260322.0 3e18c5de9bc5 3 weeks ago 634MB
2026-04-17 07:50:32.938072 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20260322.0 c68c1f5728ae 3 weeks ago 1.24GB
2026-04-17 07:50:32.938088 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20260322.0 f6e7e0d58bb1 3 weeks ago 585MB
2026-04-17 07:50:32.938099 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20260322.0 9806642932fd 3 weeks ago 357MB
2026-04-17 07:50:32.938110 | orchestrator | registry.osism.tech/osism/osism 0.20260320.0 5d0420989a40 3 weeks ago 408MB
2026-04-17 07:50:32.938121 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20260320.0 80b833af5991 3 weeks ago 232MB
2026-04-17 07:50:32.938132 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-17 07:50:32.938156 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-17 07:50:32.938167 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB
2026-04-17 07:50:32.938187 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-17 07:50:32.938198 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-17 07:50:32.938209 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-17 07:50:32.938220 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB
2026-04-17 07:50:32.938230 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-17 07:50:32.938241 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB
2026-04-17 07:50:32.938253 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB
2026-04-17 07:50:32.938264 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-17 07:50:32.938275 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB
2026-04-17 07:50:32.938285 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB
2026-04-17 07:50:32.938297 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB
2026-04-17 07:50:32.938328 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB
2026-04-17 07:50:32.938340 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB
2026-04-17 07:50:32.938352 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB
2026-04-17 07:50:32.938372 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 5 months ago 334MB
2026-04-17 07:50:32.938383 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB
2026-04-17 07:50:32.938394 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-17 07:50:32.938404 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-17 07:50:32.938415 |
orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB 2026-04-17 07:50:32.938426 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB 2026-04-17 07:50:32.938437 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-04-17 07:50:33.102701 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-17 07:50:33.102826 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-17 07:50:33.152795 | orchestrator | 2026-04-17 07:50:33.152901 | orchestrator | ## Containers @ testbed-node-0 2026-04-17 07:50:33.152917 | orchestrator | 2026-04-17 07:50:33.152929 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-17 07:50:33.152940 | orchestrator | + echo 2026-04-17 07:50:33.152951 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-04-17 07:50:33.152963 | orchestrator | + echo 2026-04-17 07:50:33.152975 | orchestrator | + osism container testbed-node-0 ps 2026-04-17 07:50:34.688907 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-17 07:50:34.689036 | orchestrator | 9e25421d1360 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 15 seconds ago Up 14 seconds (health: starting) magnum_conductor 2026-04-17 07:50:34.689068 | orchestrator | 5d0fd615fd15 registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 49 seconds ago Up 48 seconds (healthy) magnum_api 2026-04-17 07:50:34.689075 | orchestrator | 70b366d9ce48 registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 4 minutes ago Up 4 minutes grafana 2026-04-17 07:50:34.689081 | orchestrator | f2e5c7bd295d registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter 2026-04-17 07:50:34.689089 | orchestrator | 04c25ca960a3 
registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_cadvisor 2026-04-17 07:50:34.689095 | orchestrator | 26d016e6e846 registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_memcached_exporter 2026-04-17 07:50:34.689101 | orchestrator | c88c6419c787 registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter 2026-04-17 07:50:34.689107 | orchestrator | a6e56581f2a2 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-17 07:50:34.689113 | orchestrator | de0ea450d770 registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share 2026-04-17 07:50:34.689135 | orchestrator | 7980225df188 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-04-17 07:50:34.689152 | orchestrator | 274a5501d0a0 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-04-17 07:50:34.689158 | orchestrator | 4dd43a988cf9 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-04-17 07:50:34.689164 | orchestrator | 420a15c2a6c8 registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_worker 2026-04-17 07:50:34.689170 | orchestrator | e60dd4372eae registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 
minutes (healthy) octavia_housekeeping 2026-04-17 07:50:34.689176 | orchestrator | e95e8497818d registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_health_manager 2026-04-17 07:50:34.689181 | orchestrator | fac87aa84242 registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes octavia_driver_agent 2026-04-17 07:50:34.689187 | orchestrator | a90bafff52d1 registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_api 2026-04-17 07:50:34.689207 | orchestrator | e0f082549b76 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier 2026-04-17 07:50:34.689213 | orchestrator | d3b97306a95b registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_listener 2026-04-17 07:50:34.689219 | orchestrator | 27eb2d5f57ea registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_evaluator 2026-04-17 07:50:34.689225 | orchestrator | dc98e35001bd registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api 2026-04-17 07:50:34.689239 | orchestrator | 800a3eb79a16 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes ceilometer_central 2026-04-17 07:50:34.689245 | orchestrator | a44c0c373579 registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) ceilometer_notification 2026-04-17 07:50:34.689250 | orchestrator | 71cd26586b35 
registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-04-17 07:50:34.689264 | orchestrator | 02411d56f4b6 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-04-17 07:50:34.689272 | orchestrator | 32ef5f7eefd7 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer 2026-04-17 07:50:34.689283 | orchestrator | 066c4c30f5d8 registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-04-17 07:50:34.689289 | orchestrator | db18c6343868 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-04-17 07:50:34.689294 | orchestrator | da950902e6e7 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-04-17 07:50:34.689300 | orchestrator | 3177b0cdbff5 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-04-17 07:50:34.689306 | orchestrator | 31c22ff77b0e registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-04-17 07:50:34.689311 | orchestrator | 32907b86e49e registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-04-17 07:50:34.689317 | orchestrator | 1844bccf6c58 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init 
--single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup 2026-04-17 07:50:34.689323 | orchestrator | 998ed96d1cba registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume 2026-04-17 07:50:34.689328 | orchestrator | aef0406b2fc2 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-17 07:50:34.689334 | orchestrator | 2092e2191e8a registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-17 07:50:34.689344 | orchestrator | f066b29fbe14 registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) glance_api 2026-04-17 07:50:34.689350 | orchestrator | b36d397c50f6 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console 2026-04-17 07:50:34.689356 | orchestrator | 7bb0fefaf855 registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_apiserver 2026-04-17 07:50:34.689361 | orchestrator | c9662a8d09e7 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) horizon 2026-04-17 07:50:34.689367 | orchestrator | 659a61b0ab17 registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 49 minutes (healthy) nova_novncproxy 2026-04-17 07:50:34.689373 | orchestrator | 06a7c29f6003 registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_conductor 2026-04-17 07:50:34.689384 | orchestrator | a1663f91e290 
registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata 2026-04-17 07:50:34.689390 | orchestrator | 72442d5185da registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_api 2026-04-17 07:50:34.689900 | orchestrator | cbb694419655 registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_scheduler 2026-04-17 07:50:34.689911 | orchestrator | a58a05c09ee8 registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2026-04-17 07:50:34.689917 | orchestrator | 168426f0a8b0 registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2026-04-17 07:50:34.689937 | orchestrator | 67e2970d2532 registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2026-04-17 07:50:34.689943 | orchestrator | 0df730c639df registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2026-04-17 07:50:34.689949 | orchestrator | f7ad7b856d65 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh 2026-04-17 07:50:34.689954 | orchestrator | 5cf8384b44b5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-04-17 07:50:34.689960 | orchestrator | ccf7f40aa319 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-0 2026-04-17 07:50:34.689966 | orchestrator | 
b4cdabd05808 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-0 2026-04-17 07:50:34.689971 | orchestrator | c39f86513a20 registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_northd 2026-04-17 07:50:34.689977 | orchestrator | 6f35329bb3ba registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db_relay_1 2026-04-17 07:50:34.689983 | orchestrator | 9169c8dc1466 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db 2026-04-17 07:50:34.689989 | orchestrator | e2e45a88e013 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_nb_db 2026-04-17 07:50:34.689994 | orchestrator | f1dfd563026b registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_controller 2026-04-17 07:50:34.690000 | orchestrator | 7b591fb1f4ec registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_vswitchd 2026-04-17 07:50:34.690012 | orchestrator | 35863e157a4d registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_db 2026-04-17 07:50:34.690040 | orchestrator | fe251e3486a4 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) rabbitmq 2026-04-17 07:50:34.690048 | orchestrator | 08507f5f2968 registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 2 hours ago Up 2 hours (healthy) mariadb 2026-04-17 07:50:34.690057 | orchestrator | 1e7d0f687cd3 registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 
"dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel 2026-04-17 07:50:34.690070 | orchestrator | 283b30c1e175 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis 2026-04-17 07:50:34.690076 | orchestrator | c34a8dc869fb registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached 2026-04-17 07:50:34.690081 | orchestrator | 2bc9beccf960 registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards 2026-04-17 07:50:34.690087 | orchestrator | 6aed987a53b3 registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-04-17 07:50:34.690093 | orchestrator | 48a1f1993050 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-04-17 07:50:34.690098 | orchestrator | cba73efe3dbe registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-04-17 07:50:34.690104 | orchestrator | 8429e3b2357a registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-04-17 07:50:34.690110 | orchestrator | c752f1fa531e registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-17 07:50:34.690116 | orchestrator | c489ab9f3089 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-17 07:50:34.690122 | orchestrator | 0106fec924e8 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-17 07:50:34.855303 | orchestrator | 2026-04-17 
07:50:34.855423 | orchestrator | ## Images @ testbed-node-0 2026-04-17 07:50:34.855439 | orchestrator | 2026-04-17 07:50:34.855452 | orchestrator | + echo 2026-04-17 07:50:34.855464 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-17 07:50:34.855475 | orchestrator | + echo 2026-04-17 07:50:34.855523 | orchestrator | + osism container testbed-node-0 images 2026-04-17 07:50:36.465913 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-17 07:50:36.466070 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 2 weeks ago 288MB 2026-04-17 07:50:36.466097 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 2 weeks ago 1.54GB 2026-04-17 07:50:36.466146 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 2 weeks ago 1.57GB 2026-04-17 07:50:36.466164 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 2 weeks ago 590MB 2026-04-17 07:50:36.466179 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 2 weeks ago 277MB 2026-04-17 07:50:36.466197 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 2 weeks ago 1.04GB 2026-04-17 07:50:36.466213 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 2 weeks ago 427MB 2026-04-17 07:50:36.466230 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 2 weeks ago 350MB 2026-04-17 07:50:36.466246 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 2 weeks ago 683MB 2026-04-17 07:50:36.466282 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 2 weeks ago 277MB 2026-04-17 07:50:36.466299 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 2 
weeks ago 285MB 2026-04-17 07:50:36.466315 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 2 weeks ago 293MB 2026-04-17 07:50:36.466332 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 2 weeks ago 293MB 2026-04-17 07:50:36.466349 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 2 weeks ago 284MB 2026-04-17 07:50:36.466366 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 2 weeks ago 284MB 2026-04-17 07:50:36.466382 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 2 weeks ago 1.2GB 2026-04-17 07:50:36.466397 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 2 weeks ago 463MB 2026-04-17 07:50:36.466414 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 2 weeks ago 309MB 2026-04-17 07:50:36.466430 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 2 weeks ago 368MB 2026-04-17 07:50:36.466448 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 2 weeks ago 303MB 2026-04-17 07:50:36.466465 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 2 weeks ago 312MB 2026-04-17 07:50:36.466481 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 2 weeks ago 317MB 2026-04-17 07:50:36.466538 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 2 weeks ago 301MB 2026-04-17 07:50:36.466556 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 2 weeks ago 301MB 2026-04-17 
07:50:36.466573 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 2 weeks ago 301MB 2026-04-17 07:50:36.466590 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 2 weeks ago 301MB 2026-04-17 07:50:36.466605 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 2 weeks ago 1.09GB 2026-04-17 07:50:36.466628 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 2 weeks ago 1.06GB 2026-04-17 07:50:36.466640 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 2 weeks ago 1.05GB 2026-04-17 07:50:36.466673 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 2 weeks ago 997MB 2026-04-17 07:50:36.466686 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 2 weeks ago 996MB 2026-04-17 07:50:36.466697 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 2 weeks ago 1.07GB 2026-04-17 07:50:36.466707 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 2 weeks ago 1.07GB 2026-04-17 07:50:36.466718 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 2 weeks ago 1.05GB 2026-04-17 07:50:36.466729 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 2 weeks ago 1.05GB 2026-04-17 07:50:36.466739 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 2 weeks ago 1.05GB 2026-04-17 07:50:36.466750 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 2 weeks ago 996MB 2026-04-17 07:50:36.466761 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 2 weeks ago 995MB 2026-04-17 07:50:36.466772 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 2 weeks ago 995MB 2026-04-17 07:50:36.466783 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 2 weeks ago 995MB 2026-04-17 07:50:36.466802 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 2 weeks ago 994MB 2026-04-17 07:50:36.466812 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 2 weeks ago 1.12GB 2026-04-17 07:50:36.466821 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 2 weeks ago 1.79GB 2026-04-17 07:50:36.466831 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 2 weeks ago 1.43GB 2026-04-17 07:50:36.466840 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 2 weeks ago 1.43GB 2026-04-17 07:50:36.466850 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 2 weeks ago 1.44GB 2026-04-17 07:50:36.466859 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 2 weeks ago 1.24GB 2026-04-17 07:50:36.466869 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 2 weeks ago 1.07GB 2026-04-17 07:50:36.468391 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 2 weeks ago 1.02GB 2026-04-17 07:50:36.468424 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 2 weeks ago 1GB 2026-04-17 07:50:36.468440 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 
f1c21f7912dc 2 weeks ago 1GB 2026-04-17 07:50:36.468456 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 2 weeks ago 1GB 2026-04-17 07:50:36.468514 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 2 weeks ago 1.27GB 2026-04-17 07:50:36.468531 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 2 weeks ago 1.15GB 2026-04-17 07:50:36.468547 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 2 weeks ago 1.01GB 2026-04-17 07:50:36.468563 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 2 weeks ago 1GB 2026-04-17 07:50:36.468585 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 2 weeks ago 1GB 2026-04-17 07:50:36.468603 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 2 weeks ago 1.01GB 2026-04-17 07:50:36.468620 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 2 weeks ago 1GB 2026-04-17 07:50:36.468636 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 2 weeks ago 1GB 2026-04-17 07:50:36.468652 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 2 weeks ago 1.23GB 2026-04-17 07:50:36.468670 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 2 weeks ago 1.39GB 2026-04-17 07:50:36.468687 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 2 weeks ago 1.23GB 2026-04-17 07:50:36.468705 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 2 weeks ago 1.23GB 2026-04-17 07:50:36.468722 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 2 weeks ago 1.07GB 2026-04-17 07:50:36.468740 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 2 weeks ago 1.07GB 2026-04-17 07:50:36.468757 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 2 weeks ago 1.07GB 2026-04-17 07:50:36.468771 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 2 weeks ago 1.24GB 2026-04-17 07:50:36.468787 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 2 weeks ago 301MB 2026-04-17 07:50:36.468803 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-17 07:50:36.468820 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-17 07:50:36.468837 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-17 07:50:36.468852 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-17 07:50:36.468869 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-17 07:50:36.468897 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-17 07:50:36.468913 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-17 07:50:36.468930 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-17 07:50:36.468957 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-17 07:50:36.468974 | orchestrator | 
registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-17 07:50:36.469004 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-17 07:50:36.469021 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-17 07:50:36.469039 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-17 07:50:36.469055 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-17 07:50:36.469072 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-17 07:50:36.469088 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-17 07:50:36.469104 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-17 07:50:36.469119 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-17 07:50:36.469135 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-17 07:50:36.469152 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-17 07:50:36.469168 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-17 07:50:36.469184 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-17 07:50:36.469201 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-17 07:50:36.469217 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-17 07:50:36.469234 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-17 07:50:36.469245 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-17 07:50:36.469255 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-17 07:50:36.469264 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-17 07:50:36.469274 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-17 07:50:36.469284 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-17 07:50:36.469293 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-17 07:50:36.469303 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-17 07:50:36.469312 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-17 07:50:36.469329 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-17 07:50:36.469339 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-17 07:50:36.469349 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-17 07:50:36.469358 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-17 07:50:36.469367 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-17 07:50:36.469377 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-17 07:50:36.469386 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-17 07:50:36.469403 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-17 07:50:36.469413 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-17 07:50:36.469423 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-17 07:50:36.469432 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-17 07:50:36.469451 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-17 07:50:36.469462 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-17 07:50:36.469471 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-17 07:50:36.469481 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-17 07:50:36.469512 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-17 07:50:36.469523 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-17 07:50:36.469533 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-17 07:50:36.469542 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-17 07:50:36.469552 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-17 07:50:36.469561 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-17 07:50:36.469570 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-17 07:50:36.469580 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-17 07:50:36.469590 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-17 07:50:36.469599 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-17 07:50:36.469615 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-17 07:50:36.469625 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-17 07:50:36.469634 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-17 07:50:36.469643 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-17 07:50:36.469653 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-17 07:50:36.469663 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-17 07:50:36.469672 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-17 07:50:36.469682 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-17 07:50:36.469691 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-17 07:50:36.469701 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-17 07:50:36.469710 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-17 07:50:36.618618 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-17 07:50:36.619529 | orchestrator | ++ semver 10.0.0 5.0.0
2026-04-17 07:50:36.678110 | orchestrator |
2026-04-17 07:50:36.678193 | orchestrator | ## Containers @ testbed-node-1
2026-04-17 07:50:36.678205 | orchestrator |
2026-04-17 07:50:36.678215 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-17 07:50:36.678225 | orchestrator | + echo
2026-04-17 07:50:36.678237 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-04-17 07:50:36.678248 | orchestrator | + echo
2026-04-17 07:50:36.678258 | orchestrator | + osism container testbed-node-1 ps
2026-04-17 07:50:38.189680 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-17 07:50:38.189795 | orchestrator | f2b02b1b6501 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 19 seconds ago Up 17 seconds (health: starting) magnum_conductor
2026-04-17 07:50:38.189814 | orchestrator | b88f94390c18 registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 43 seconds ago Up 41 seconds (healthy) magnum_api
2026-04-17 07:50:38.189826 | orchestrator | ad3722ba11ef registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 3 minutes ago Up 3 minutes grafana
2026-04-17 07:50:38.189837 | orchestrator | 13cf4da8ea00 registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter
2026-04-17 07:50:38.189850 | orchestrator | 11cc575d24f8 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_cadvisor
2026-04-17 07:50:38.189861 | orchestrator | b71b7be71213 registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_memcached_exporter
2026-04-17 07:50:38.189872 | orchestrator | a1947c4927a9 registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter
2026-04-17 07:50:38.189913 | orchestrator | 9eaae2804da5 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter
2026-04-17 07:50:38.189925 | orchestrator | 23907a032163 registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share
2026-04-17 07:50:38.189955 | orchestrator | 15fa8072f983 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler
2026-04-17 07:50:38.189967 | orchestrator | 7ed08ba1c0f7 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-04-17 07:50:38.189978 | orchestrator | d469280c3520 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-04-17 07:50:38.189989 | orchestrator | c81853b60e19 registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_worker
2026-04-17 07:50:38.189999 | orchestrator | c556496bdd1d registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_housekeeping
2026-04-17 07:50:38.190010 | orchestrator | 3813093fa7fd registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_health_manager
2026-04-17 07:50:38.190067 | orchestrator | bc5963fa48e9 registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes octavia_driver_agent
2026-04-17 07:50:38.190079 | orchestrator | 777c101a99a6 registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_api
2026-04-17 07:50:38.190111 | orchestrator | 98d6cc08fee5 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier
2026-04-17 07:50:38.190123 | orchestrator | 80fc410103b8 registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_listener
2026-04-17 07:50:38.190133 | orchestrator | 563a5f6a21e9 registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_evaluator
2026-04-17 07:50:38.190150 | orchestrator | 86a96c373a3d registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api
2026-04-17 07:50:38.190161 | orchestrator | 6cb9d80060cf registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes ceilometer_central
2026-04-17 07:50:38.190172 | orchestrator | eb547cd8c262 registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) ceilometer_notification
2026-04-17 07:50:38.190190 | orchestrator | b284bfc2aaec registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker
2026-04-17 07:50:38.190201 | orchestrator | fb77b41f4d36 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns
2026-04-17 07:50:38.190212 | orchestrator | 6fcce57ba598 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer
2026-04-17 07:50:38.190223 | orchestrator | 0e825f9b39fc registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-04-17 07:50:38.190233 | orchestrator | 6cedcc07ddf3 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-04-17 07:50:38.190249 | orchestrator | 78edc3beb3d2 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-04-17 07:50:38.190261 | orchestrator | 9f4a53b3506c registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-04-17 07:50:38.190271 | orchestrator | 1b20863deaa0 registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-04-17 07:50:38.190282 | orchestrator | 1b32615f6f97 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-04-17 07:50:38.190293 | orchestrator | 19f31f279922 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup
2026-04-17 07:50:38.190303 | orchestrator | 48bb4e1c5009 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume
2026-04-17 07:50:38.190314 | orchestrator | ab1e850e7d13 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler
2026-04-17 07:50:38.190324 | orchestrator | 48f52df637f6 registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_api
2026-04-17 07:50:38.190335 | orchestrator | b82a4a0ca807 registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) glance_api
2026-04-17 07:50:38.190346 | orchestrator | ef4de9d2640d registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console
2026-04-17 07:50:38.190356 | orchestrator | edb5eba3f58e registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_apiserver
2026-04-17 07:50:38.190367 | orchestrator | 36647fd1390e registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) horizon
2026-04-17 07:50:38.190389 | orchestrator | 70f9729b5232 registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 49 minutes (healthy) nova_novncproxy
2026-04-17 07:50:38.190401 | orchestrator | f98f9367191a registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_conductor
2026-04-17 07:50:38.190411 | orchestrator | 7d414cf58784 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata
2026-04-17 07:50:38.190422 | orchestrator | e641cfef6053 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_api
2026-04-17 07:50:38.190432 | orchestrator | a7655ce645f7 registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_scheduler
2026-04-17 07:50:38.190443 | orchestrator | 1545783402fe registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server
2026-04-17 07:50:38.190453 | orchestrator | 9cb4b79e5216 registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api
2026-04-17 07:50:38.190464 | orchestrator | 4861d3c19751 registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone
2026-04-17 07:50:38.190483 | orchestrator | 3ebfe501db76 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet
2026-04-17 07:50:38.190531 | orchestrator | 5ed77821f15c registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh
2026-04-17 07:50:38.190543 | orchestrator | 57457c715050 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1
2026-04-17 07:50:38.190554 | orchestrator | 0934db37a37a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-1
2026-04-17 07:50:38.190565 | orchestrator | 293a28d17cc6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-1
2026-04-17 07:50:38.190576 | orchestrator | 183c21a9b280 registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_northd
2026-04-17 07:50:38.190587 | orchestrator | c5da51ce91e5 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db_relay_1
2026-04-17 07:50:38.190598 | orchestrator | 62b4f806c422 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db
2026-04-17 07:50:38.190609 | orchestrator | 7c02a0c53417 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_nb_db
2026-04-17 07:50:38.190627 | orchestrator | 87c8a3c46ff8 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_controller
2026-04-17 07:50:38.190638 | orchestrator | 536417e23ea5 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_vswitchd
2026-04-17 07:50:38.190649 | orchestrator | ba39b2258f6b registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_db
2026-04-17 07:50:38.190660 | orchestrator | 86156f5774ed registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) rabbitmq
2026-04-17 07:50:38.190671 | orchestrator | de3f14e26d89 registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 2 hours ago Up 2 hours (healthy) mariadb
2026-04-17 07:50:38.190682 | orchestrator | 440e701f6b9e registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel
2026-04-17 07:50:38.190693 | orchestrator | 347a91d50936 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis
2026-04-17 07:50:38.190704 | orchestrator | d53377ede3be registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached
2026-04-17 07:50:38.190714 | orchestrator | 6dc30c079dc9 registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-04-17 07:50:38.190725 | orchestrator | 16f518e2cdd0 registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-04-17 07:50:38.190743 | orchestrator | 8670ae614d1a registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-04-17 07:50:38.190754 | orchestrator | 40f553e0233a registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-04-17 07:50:38.190765 | orchestrator | e6330090e112 registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-04-17 07:50:38.190776 | orchestrator | 7f1520f6caa1 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-17 07:50:38.190787 | orchestrator | ea1954f90904 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-17 07:50:38.190798 | orchestrator | 1e57188a8999 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-17 07:50:38.346099 | orchestrator |
2026-04-17 07:50:38.346201 | orchestrator | ## Images @ testbed-node-1
2026-04-17 07:50:38.346215 | orchestrator |
2026-04-17 07:50:38.346254 | orchestrator | + echo
2026-04-17 07:50:38.346266 | orchestrator | + echo '## Images @ testbed-node-1'
2026-04-17 07:50:38.346277 | orchestrator | + echo
2026-04-17 07:50:38.346289 | orchestrator | + osism container testbed-node-1 images
2026-04-17 07:50:40.047301 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-17 07:50:40.047437 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 2 weeks ago 288MB
2026-04-17 07:50:40.047465 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 2 weeks ago 1.54GB
2026-04-17 07:50:40.047589 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 2 weeks ago 1.57GB
2026-04-17 07:50:40.047615 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 2 weeks ago 590MB
2026-04-17 07:50:40.047635 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 2 weeks ago 277MB
2026-04-17 07:50:40.047655 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 2 weeks ago 1.04GB
2026-04-17 07:50:40.047682 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 2 weeks ago 350MB
2026-04-17 07:50:40.047703 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 2 weeks ago 427MB
2026-04-17 07:50:40.047719 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 2 weeks ago 683MB
2026-04-17 07:50:40.047738 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 2 weeks ago 277MB
2026-04-17 07:50:40.047756 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 2 weeks ago 285MB
2026-04-17 07:50:40.047774 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 2 weeks ago 293MB
2026-04-17 07:50:40.047792 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 2 weeks ago 293MB
2026-04-17 07:50:40.047810 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 2 weeks ago 284MB
2026-04-17 07:50:40.047830 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 2 weeks ago 284MB
2026-04-17 07:50:40.047848 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 2 weeks ago 1.2GB
2026-04-17 07:50:40.047867 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 2 weeks ago 463MB
2026-04-17 07:50:40.047886 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 2 weeks ago 309MB
2026-04-17 07:50:40.048015 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 2 weeks ago 368MB
2026-04-17 07:50:40.048042 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 2 weeks ago 303MB
2026-04-17 07:50:40.048058 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 2 weeks ago 312MB
2026-04-17 07:50:40.048070 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 2 weeks ago 317MB
2026-04-17 07:50:40.048090 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 2 weeks ago 301MB
2026-04-17 07:50:40.048141 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 2 weeks ago 301MB
2026-04-17 07:50:40.048161 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 2 weeks ago 301MB
2026-04-17 07:50:40.048177 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 2 weeks ago 301MB
2026-04-17 07:50:40.048194 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 2 weeks ago 1.09GB
2026-04-17 07:50:40.048213 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 2 weeks ago 1.06GB
2026-04-17 07:50:40.048231 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 2 weeks ago 1.05GB
2026-04-17 07:50:40.048275 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 2 weeks ago 997MB
2026-04-17 07:50:40.048296 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 2 weeks ago 996MB
2026-04-17 07:50:40.048314 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 2 weeks ago 1.07GB
2026-04-17 07:50:40.048331 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 2 weeks ago 1.07GB
2026-04-17 07:50:40.048349 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 2 weeks ago 1.05GB
2026-04-17 07:50:40.048368 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 2 weeks ago 1.05GB
2026-04-17 07:50:40.048385 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 2 weeks ago 1.05GB
2026-04-17 07:50:40.048403 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 2 weeks ago 996MB
2026-04-17 07:50:40.048420 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 2 weeks ago 995MB
2026-04-17 07:50:40.048438 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 2 weeks ago 995MB
2026-04-17 07:50:40.048457 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 2 weeks ago 995MB
2026-04-17 07:50:40.048475 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 2 weeks ago 994MB
2026-04-17 07:50:40.048520 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 2 weeks ago 1.12GB
2026-04-17 07:50:40.048540 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 2 weeks ago 1.79GB
2026-04-17 07:50:40.048558 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 2 weeks ago 1.43GB
2026-04-17 07:50:40.048576 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 2 weeks ago 1.43GB
2026-04-17 07:50:40.048594 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 2 weeks ago 1.44GB
2026-04-17 07:50:40.049978 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 2 weeks ago 1.24GB
2026-04-17 07:50:40.050013 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 2 weeks ago 1.07GB
2026-04-17 07:50:40.050105 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 2 weeks ago 1.02GB
2026-04-17 07:50:40.050117 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 2 weeks ago 1GB
2026-04-17 07:50:40.050128 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 2 weeks ago 1GB
2026-04-17 07:50:40.050139 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 2 weeks ago 1GB
2026-04-17 07:50:40.050151 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 2 weeks ago 1.27GB
2026-04-17 07:50:40.050162 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 2 weeks ago 1.15GB
2026-04-17 07:50:40.050172 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 2 weeks ago 1.01GB
2026-04-17 07:50:40.050183 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 2 weeks ago 1GB
2026-04-17 07:50:40.050194 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 2 weeks ago 1GB
2026-04-17 07:50:40.050205 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 2 weeks ago 1.01GB
2026-04-17 07:50:40.050215 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 2 weeks ago 1GB
2026-04-17 07:50:40.050226 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 2 weeks ago 1GB
2026-04-17 07:50:40.050237 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 2 weeks ago 1.23GB
2026-04-17 07:50:40.050247 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 2 weeks ago 1.39GB
2026-04-17 07:50:40.050258 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 2 weeks ago 1.23GB
2026-04-17 07:50:40.050269 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 2 weeks ago 1.23GB
2026-04-17 07:50:40.050291 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 2 weeks ago 1.07GB
2026-04-17 07:50:40.050303 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 2 weeks ago 1.07GB
2026-04-17 07:50:40.050313 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 2 weeks ago 1.07GB
2026-04-17 07:50:40.050328 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 2 weeks ago 1.24GB
2026-04-17 07:50:40.050339 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 2 weeks ago 301MB
2026-04-17 07:50:40.050350 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-17 07:50:40.050361 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-17 07:50:40.050372 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-17 07:50:40.050383 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-17 07:50:40.050405 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-17 07:50:40.050424 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-17 07:50:40.050443 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-17 07:50:40.050460 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-17 07:50:40.050519 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-17 07:50:40.050541 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-17 07:50:40.050553 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-17 07:50:40.050564 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-17 07:50:40.050580 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-17 07:50:40.050594 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-17 07:50:40.050606 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-17 07:50:40.050620 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-17 07:50:40.050632 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-17 07:50:40.050646 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-17 07:50:40.050659 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-17 07:50:40.050671 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-17 07:50:40.050683 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-17 07:50:40.050693 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-17 07:50:40.050704 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-17 07:50:40.050715 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-17 07:50:40.050725 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-17 07:50:40.050736 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-17 07:50:40.050747 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-17 07:50:40.050757 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-17 07:50:40.050774 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-17 07:50:40.050785 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-17 07:50:40.050805 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-17 07:50:40.050816 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-17 07:50:40.050826 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-17 07:50:40.050837 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-17 07:50:40.050848 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-17 07:50:40.050858 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4
months ago 974MB 2026-04-17 07:50:40.050869 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-17 07:50:40.050879 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-17 07:50:40.050897 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-17 07:50:40.050908 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-17 07:50:40.050919 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-17 07:50:40.050929 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-17 07:50:40.050940 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-17 07:50:40.050951 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-17 07:50:40.050961 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-17 07:50:40.050972 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-17 07:50:40.050983 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-17 07:50:40.050993 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-17 07:50:40.051004 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-17 07:50:40.051014 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 
months ago 1.05GB 2026-04-17 07:50:40.051025 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-17 07:50:40.051035 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-17 07:50:40.051046 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-17 07:50:40.051057 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-17 07:50:40.051067 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-17 07:50:40.051085 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-17 07:50:40.051095 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-17 07:50:40.051106 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-17 07:50:40.051116 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-17 07:50:40.051127 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-17 07:50:40.051137 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-17 07:50:40.051148 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-17 07:50:40.051159 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-17 07:50:40.051169 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months 
ago 1.4GB 2026-04-17 07:50:40.051180 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-17 07:50:40.051196 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-17 07:50:40.051207 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-17 07:50:40.051218 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-17 07:50:40.051229 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-17 07:50:40.208866 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-17 07:50:40.209088 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-17 07:50:40.257358 | orchestrator | 2026-04-17 07:50:40.257441 | orchestrator | ## Containers @ testbed-node-2 2026-04-17 07:50:40.257454 | orchestrator | 2026-04-17 07:50:40.257465 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-17 07:50:40.257475 | orchestrator | + echo 2026-04-17 07:50:40.257487 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-17 07:50:40.257535 | orchestrator | + echo 2026-04-17 07:50:40.257546 | orchestrator | + osism container testbed-node-2 ps 2026-04-17 07:50:41.811388 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-17 07:50:41.811491 | orchestrator | 51f97aa08ac7 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 18 seconds ago Up 16 seconds (health: starting) magnum_conductor 2026-04-17 07:50:41.811548 | orchestrator | 0eb9c72375a8 registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 46 seconds ago Up 45 seconds (healthy) magnum_api 2026-04-17 07:50:41.811561 | orchestrator | 98934b96e6bf 
registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 3 minutes ago Up 3 minutes grafana 2026-04-17 07:50:41.811572 | orchestrator | 59aba87fc197 registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter 2026-04-17 07:50:41.811608 | orchestrator | a6d199ba661f registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_cadvisor 2026-04-17 07:50:41.811620 | orchestrator | 82a6d42e039a registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_memcached_exporter 2026-04-17 07:50:41.811631 | orchestrator | a6875a92516e registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter 2026-04-17 07:50:41.811642 | orchestrator | e422e59979c0 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-17 07:50:41.811653 | orchestrator | 879ad63d77cc registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share 2026-04-17 07:50:41.811664 | orchestrator | 8ccdfb720bfa registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-04-17 07:50:41.811690 | orchestrator | 83a8f1243d45 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-04-17 07:50:41.811702 | orchestrator | 6ea040eb4e85 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 
minutes (healthy) manila_api 2026-04-17 07:50:41.811713 | orchestrator | bd811545c9d3 registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_worker 2026-04-17 07:50:41.811723 | orchestrator | 54ab7d77a830 registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_housekeeping 2026-04-17 07:50:41.811734 | orchestrator | 172e65f9f138 registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_health_manager 2026-04-17 07:50:41.811745 | orchestrator | dc4a7376a0eb registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes octavia_driver_agent 2026-04-17 07:50:41.811755 | orchestrator | 6c115a92618d registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_api 2026-04-17 07:50:41.811783 | orchestrator | f165cf4e2b7d registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier 2026-04-17 07:50:41.811796 | orchestrator | 003f583761e1 registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_listener 2026-04-17 07:50:41.811807 | orchestrator | 254eff6a147b registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_evaluator 2026-04-17 07:50:41.811818 | orchestrator | 938a9d7cf9c0 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api 2026-04-17 07:50:41.811837 | orchestrator | 85d97b4da04c 
registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes ceilometer_central 2026-04-17 07:50:41.811848 | orchestrator | 5a0120fc23dd registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) ceilometer_notification 2026-04-17 07:50:41.811859 | orchestrator | 732c57072c37 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-04-17 07:50:41.811870 | orchestrator | c43236ebf491 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-04-17 07:50:41.811881 | orchestrator | 181cd9bffb71 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer 2026-04-17 07:50:41.811891 | orchestrator | effb64730fb4 registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-04-17 07:50:41.811902 | orchestrator | 445d808c8fd4 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-04-17 07:50:41.813654 | orchestrator | 804846a5a3d1 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-04-17 07:50:41.813684 | orchestrator | 6408d3229f7b registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-04-17 07:50:41.813696 | orchestrator | c829b29cdbe8 registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 
"dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-04-17 07:50:41.813707 | orchestrator | 467498fb80d9 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-04-17 07:50:41.813718 | orchestrator | 6feb1c325e76 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup 2026-04-17 07:50:41.813729 | orchestrator | aaf003cef764 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume 2026-04-17 07:50:41.813739 | orchestrator | 91c62e04bb4a registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-17 07:50:41.813750 | orchestrator | 2e5334ea3df6 registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-17 07:50:41.813761 | orchestrator | 5b35455de89a registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) glance_api 2026-04-17 07:50:41.813772 | orchestrator | 4b2adef702e5 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console 2026-04-17 07:50:41.813801 | orchestrator | d476009a1070 registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_apiserver 2026-04-17 07:50:41.813812 | orchestrator | dcf3985e0d19 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) horizon 2026-04-17 07:50:41.813823 | orchestrator | ad553bee207b 
registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_novncproxy 2026-04-17 07:50:41.813834 | orchestrator | 222aca848ef6 registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 50 minutes (healthy) nova_conductor 2026-04-17 07:50:41.813845 | orchestrator | ade2a5c7a7a7 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata 2026-04-17 07:50:41.813856 | orchestrator | 2101f40e2262 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_api 2026-04-17 07:50:41.813878 | orchestrator | ea000358f85d registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_scheduler 2026-04-17 07:50:41.813889 | orchestrator | 1256904ccf19 registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2026-04-17 07:50:41.813900 | orchestrator | 2a040ab5ab61 registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2026-04-17 07:50:41.813919 | orchestrator | 7224ab53468c registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2026-04-17 07:50:41.813935 | orchestrator | 4d20dea86357 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2026-04-17 07:50:41.813946 | orchestrator | e0ea521d71c2 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour 
(healthy) keystone_ssh 2026-04-17 07:50:41.813957 | orchestrator | b8f0409cdc13 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-04-17 07:50:41.813968 | orchestrator | 015717d05664 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-2 2026-04-17 07:50:41.813978 | orchestrator | 549053e28e18 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-2 2026-04-17 07:50:41.813989 | orchestrator | 1d7cb4c538de registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_northd 2026-04-17 07:50:41.814006 | orchestrator | 0c5acff96df9 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db_relay_1 2026-04-17 07:50:41.814073 | orchestrator | a8176cae0671 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db 2026-04-17 07:50:41.814087 | orchestrator | a9e85737db8b registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_nb_db 2026-04-17 07:50:41.814098 | orchestrator | ca5cac797797 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_controller 2026-04-17 07:50:41.814109 | orchestrator | 773824e7b736 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_vswitchd 2026-04-17 07:50:41.814120 | orchestrator | 11caf15e6459 registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_db 2026-04-17 07:50:41.814131 | orchestrator | 01025cf93fee 
registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) rabbitmq 2026-04-17 07:50:41.814142 | orchestrator | d0b3640eb4a6 registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 2 hours ago Up 2 hours (healthy) mariadb 2026-04-17 07:50:41.814153 | orchestrator | 7c26bad68936 registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel 2026-04-17 07:50:41.814164 | orchestrator | 3df81509abd4 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis 2026-04-17 07:50:41.814175 | orchestrator | d43c08fab1af registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached 2026-04-17 07:50:41.814186 | orchestrator | f013362f2780 registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards 2026-04-17 07:50:41.814204 | orchestrator | bff02e51bbd7 registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-04-17 07:50:41.814215 | orchestrator | ee441f51451c registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-04-17 07:50:41.814232 | orchestrator | cf892eb6d3d2 registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-04-17 07:50:41.814243 | orchestrator | 9293aea4e480 registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-04-17 07:50:41.814254 | orchestrator | 8100b21138a7 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 2 
hours ago Up 2 hours cron 2026-04-17 07:50:41.814273 | orchestrator | b0bb12e5b614 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-17 07:50:41.814284 | orchestrator | 0635cf6217cb registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-17 07:50:41.965872 | orchestrator | 2026-04-17 07:50:41.965964 | orchestrator | ## Images @ testbed-node-2 2026-04-17 07:50:41.965979 | orchestrator | 2026-04-17 07:50:41.965991 | orchestrator | + echo 2026-04-17 07:50:41.966003 | orchestrator | + echo '## Images @ testbed-node-2' 2026-04-17 07:50:41.966070 | orchestrator | + echo 2026-04-17 07:50:41.966084 | orchestrator | + osism container testbed-node-2 images 2026-04-17 07:50:43.512992 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-17 07:50:43.513095 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 2 weeks ago 288MB 2026-04-17 07:50:43.513112 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 2 weeks ago 1.54GB 2026-04-17 07:50:43.513124 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 2 weeks ago 1.57GB 2026-04-17 07:50:43.513135 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 2 weeks ago 590MB 2026-04-17 07:50:43.513146 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 2 weeks ago 277MB 2026-04-17 07:50:43.513157 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 2 weeks ago 1.04GB 2026-04-17 07:50:43.513168 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 2 weeks ago 350MB 2026-04-17 07:50:43.513179 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 
3.0.6.20260328 ccffdf9574f0 2 weeks ago 427MB 2026-04-17 07:50:43.513189 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 2 weeks ago 683MB 2026-04-17 07:50:43.513200 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 2 weeks ago 277MB 2026-04-17 07:50:43.513211 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 2 weeks ago 285MB 2026-04-17 07:50:43.513222 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 2 weeks ago 293MB 2026-04-17 07:50:43.513232 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 2 weeks ago 293MB 2026-04-17 07:50:43.513243 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 2 weeks ago 284MB 2026-04-17 07:50:43.513254 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 2 weeks ago 284MB 2026-04-17 07:50:43.513265 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 2 weeks ago 1.2GB 2026-04-17 07:50:43.513275 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 2 weeks ago 463MB 2026-04-17 07:50:43.513286 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 2 weeks ago 309MB 2026-04-17 07:50:43.514905 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 2 weeks ago 368MB 2026-04-17 07:50:43.514942 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 2 weeks ago 303MB 2026-04-17 07:50:43.516412 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 2 weeks ago 312MB 2026-04-17 
07:50:43.516457 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 2 weeks ago 317MB 2026-04-17 07:50:43.516478 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 2 weeks ago 301MB 2026-04-17 07:50:43.516531 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 2 weeks ago 301MB 2026-04-17 07:50:43.516553 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 2 weeks ago 301MB 2026-04-17 07:50:43.516573 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 2 weeks ago 301MB 2026-04-17 07:50:43.516592 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 2 weeks ago 1.09GB 2026-04-17 07:50:43.516610 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 2 weeks ago 1.06GB 2026-04-17 07:50:43.516629 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 2 weeks ago 1.05GB 2026-04-17 07:50:43.516647 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 2 weeks ago 997MB 2026-04-17 07:50:43.516667 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 2 weeks ago 996MB 2026-04-17 07:50:43.516686 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 2 weeks ago 1.07GB 2026-04-17 07:50:43.516704 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 2 weeks ago 1.07GB 2026-04-17 07:50:43.516722 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 2 weeks ago 1.05GB 2026-04-17 07:50:43.516740 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 2 weeks ago 1.05GB 2026-04-17 07:50:43.516757 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 2 weeks ago 1.05GB 2026-04-17 07:50:43.516776 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 2 weeks ago 996MB 2026-04-17 07:50:43.516794 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 2 weeks ago 995MB 2026-04-17 07:50:43.516812 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 2 weeks ago 995MB 2026-04-17 07:50:43.516831 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 2 weeks ago 995MB 2026-04-17 07:50:43.516843 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 2 weeks ago 994MB 2026-04-17 07:50:43.516866 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 2 weeks ago 1.12GB 2026-04-17 07:50:43.516877 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 2 weeks ago 1.79GB 2026-04-17 07:50:43.516889 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 2 weeks ago 1.43GB 2026-04-17 07:50:43.516900 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 2 weeks ago 1.43GB 2026-04-17 07:50:43.516931 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 2 weeks ago 1.44GB 2026-04-17 07:50:43.516942 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 2 weeks ago 1.24GB 2026-04-17 07:50:43.516953 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 
cf9981ab1a70 2 weeks ago 1.07GB
2026-04-17 07:50:43.516964 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 2 weeks ago 1.02GB
2026-04-17 07:50:43.516975 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 2 weeks ago 1GB
2026-04-17 07:50:43.516986 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 2 weeks ago 1GB
2026-04-17 07:50:43.517015 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 2 weeks ago 1GB
2026-04-17 07:50:43.517027 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 2 weeks ago 1.27GB
2026-04-17 07:50:43.517038 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 2 weeks ago 1.15GB
2026-04-17 07:50:43.517049 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 2 weeks ago 1.01GB
2026-04-17 07:50:43.517060 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 2 weeks ago 1GB
2026-04-17 07:50:43.517071 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 2 weeks ago 1GB
2026-04-17 07:50:43.517082 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 2 weeks ago 1.01GB
2026-04-17 07:50:43.517093 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 2 weeks ago 1GB
2026-04-17 07:50:43.517103 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 2 weeks ago 1GB
2026-04-17 07:50:43.517114 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 2 weeks ago 1.23GB
2026-04-17 07:50:43.517125 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 2 weeks ago 1.39GB
2026-04-17 07:50:43.517136 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 2 weeks ago 1.23GB
2026-04-17 07:50:43.517147 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 2 weeks ago 1.23GB
2026-04-17 07:50:43.517158 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 2 weeks ago 1.07GB
2026-04-17 07:50:43.517168 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 2 weeks ago 1.07GB
2026-04-17 07:50:43.517179 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 2 weeks ago 1.07GB
2026-04-17 07:50:43.517190 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 2 weeks ago 1.24GB
2026-04-17 07:50:43.517201 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 2 weeks ago 301MB
2026-04-17 07:50:43.517212 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-17 07:50:43.517232 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-17 07:50:43.517250 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-17 07:50:43.517278 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-17 07:50:43.517299 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-17 07:50:43.517315 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-17 07:50:43.517332 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-17 07:50:43.517347 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-17 07:50:43.517364 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-17 07:50:43.517384 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-17 07:50:43.517403 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-17 07:50:43.517421 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-17 07:50:43.517443 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-17 07:50:43.517462 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-17 07:50:43.517473 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-17 07:50:43.517484 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-17 07:50:43.517518 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-17 07:50:43.517531 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-17 07:50:43.517542 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-17 07:50:43.517553 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-17 07:50:43.517563 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-17 07:50:43.517574 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-17 07:50:43.517585 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-17 07:50:43.517596 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-17 07:50:43.517606 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-17 07:50:43.517617 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-17 07:50:43.517638 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-17 07:50:43.517649 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-17 07:50:43.517660 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-17 07:50:43.517671 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-17 07:50:43.517682 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-17 07:50:43.517692 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-17 07:50:43.517703 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-17 07:50:43.517714 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-17 07:50:43.517724 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-17 07:50:43.517735 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-17 07:50:43.517746 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-17 07:50:43.517757 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-17 07:50:43.517768 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-17 07:50:43.517779 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-17 07:50:43.517789 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-17 07:50:43.517800 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-17 07:50:43.517811 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-17 07:50:43.517827 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-17 07:50:43.517839 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-17 07:50:43.517850 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-17 07:50:43.517860 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-17 07:50:43.517871 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-17 07:50:43.517882 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-17 07:50:43.517893 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-17 07:50:43.517904 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-17 07:50:43.517921 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-17 07:50:43.517932 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-17 07:50:43.517943 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-17 07:50:43.517954 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-17 07:50:43.517965 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-17 07:50:43.517976 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-17 07:50:43.517987 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-17 07:50:43.517998 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-17 07:50:43.518009 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-17 07:50:43.518072 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-17 07:50:43.518084 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-17 07:50:43.518095 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-17 07:50:43.518106 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-17 07:50:43.518116 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-17 07:50:43.518127 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-17 07:50:43.518138 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-17 07:50:43.518149 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-17 07:50:43.518160 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-17 07:50:43.674909 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-04-17 07:50:43.683974 | orchestrator | + set -e
2026-04-17 07:50:43.684047 | orchestrator | + source /opt/manager-vars.sh
2026-04-17 07:50:43.684061 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-17 07:50:43.684072 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-17 07:50:43.684083 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-17 07:50:43.684093 | orchestrator | ++ CEPH_VERSION=reef
2026-04-17 07:50:43.684105 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-17 07:50:43.684117 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-17 07:50:43.684144 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-17 07:50:43.684156 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-17 07:50:43.684167 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-17 07:50:43.684178 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-17 07:50:43.684188 | orchestrator | ++ export ARA=false
2026-04-17 07:50:43.684249 | orchestrator | ++ ARA=false
2026-04-17 07:50:43.684262 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-17 07:50:43.684273 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-17 07:50:43.684284 | orchestrator | ++ export TEMPEST=false
2026-04-17 07:50:43.684295 | orchestrator | ++ TEMPEST=false
2026-04-17 07:50:43.684307 | orchestrator | ++ export IS_ZUUL=true
2026-04-17 07:50:43.684338 | orchestrator | ++ IS_ZUUL=true
2026-04-17 07:50:43.684349 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96
2026-04-17 07:50:43.684361 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96
2026-04-17 07:50:43.684372 | orchestrator | ++ export EXTERNAL_API=false
2026-04-17 07:50:43.684387 | orchestrator | ++ EXTERNAL_API=false
2026-04-17 07:50:43.684399 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-17 07:50:43.684410 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-17 07:50:43.684421 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-17 07:50:43.684431 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-17 07:50:43.684442 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-17 07:50:43.684453 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-17 07:50:43.684464 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-17 07:50:43.684475 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-17 07:50:43.684486 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-17 07:50:43.684528 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-04-17 07:50:43.693948 | orchestrator | + set -e
2026-04-17 07:50:43.694006 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-17 07:50:43.694168 | orchestrator | ++ export INTERACTIVE=false
2026-04-17 07:50:43.694182 | orchestrator | ++ INTERACTIVE=false
2026-04-17 07:50:43.694193 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-17 07:50:43.694204 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-17 07:50:43.694225 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-17 07:50:43.696181 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-17 07:50:43.702551 | orchestrator |
2026-04-17 07:50:43.702599 | orchestrator | # Ceph status
2026-04-17 07:50:43.702611 | orchestrator |
2026-04-17 07:50:43.702623 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-17 07:50:43.702634 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-17 07:50:43.702645 | orchestrator | + echo
2026-04-17 07:50:43.702656 | orchestrator | + echo '# Ceph status'
2026-04-17 07:50:43.702667 | orchestrator | + echo
2026-04-17 07:50:43.702679 | orchestrator | + ceph -s
2026-04-17 07:50:44.363419 | orchestrator |   cluster:
2026-04-17 07:50:44.363544 | orchestrator |     id: 11111111-1111-1111-1111-111111111111
2026-04-17 07:50:44.363565 | orchestrator |     health: HEALTH_OK
2026-04-17 07:50:44.363578 | orchestrator |
2026-04-17 07:50:44.363591 | orchestrator |   services:
2026-04-17 07:50:44.363603 | orchestrator |     mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 107m)
2026-04-17 07:50:44.363628 | orchestrator |     mgr: testbed-node-0(active, since 102m), standbys: testbed-node-1, testbed-node-2
2026-04-17 07:50:44.363642 | orchestrator |     mds: 1/1 daemons up, 2 standby
2026-04-17 07:50:44.363654 | orchestrator |     osd: 6 osds: 6 up (since 94m), 6 in (since 3h)
2026-04-17 07:50:44.363665 | orchestrator |     rgw: 3 daemons active (3 hosts, 1 zones)
2026-04-17 07:50:44.363678 | orchestrator |
2026-04-17 07:50:44.363690 | orchestrator |   data:
2026-04-17 07:50:44.363702 | orchestrator |     volumes: 1/1 healthy
2026-04-17 07:50:44.363713 | orchestrator |     pools: 14 pools, 401 pgs
2026-04-17 07:50:44.363725 | orchestrator |     objects: 819 objects, 2.8 GiB
2026-04-17 07:50:44.363738 | orchestrator |     usage: 7.9 GiB used, 112 GiB / 120 GiB avail
2026-04-17 07:50:44.363751 | orchestrator |     pgs: 401 active+clean
2026-04-17 07:50:44.363763 | orchestrator |
2026-04-17 07:50:44.363774 | orchestrator |   io:
2026-04-17 07:50:44.363787 | orchestrator |     client: 1.2 KiB/s rd, 1 op/s rd, 0 op/s wr
2026-04-17 07:50:44.363799 | orchestrator |
2026-04-17 07:50:44.404399 | orchestrator |
2026-04-17 07:50:44.404475 | orchestrator | # Ceph versions
2026-04-17 07:50:44.404483 | orchestrator |
2026-04-17 07:50:44.404489 | orchestrator | + echo
2026-04-17 07:50:44.404521 | orchestrator | + echo '# Ceph versions'
2026-04-17 07:50:44.404531 | orchestrator | + echo
2026-04-17 07:50:44.404537 | orchestrator | + ceph versions
2026-04-17 07:50:44.960166 | orchestrator | {
2026-04-17 07:50:44.960318 | orchestrator |     "mon": {
2026-04-17 07:50:44.960337 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-17 07:50:44.960351 | orchestrator |     },
2026-04-17 07:50:44.960362 | orchestrator |     "mgr": {
2026-04-17 07:50:44.960373 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-17 07:50:44.961138 | orchestrator |     },
2026-04-17 07:50:44.961160 | orchestrator |     "osd": {
2026-04-17 07:50:44.961172 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-04-17 07:50:44.961183 | orchestrator |     },
2026-04-17 07:50:44.961194 | orchestrator |     "mds": {
2026-04-17 07:50:44.961205 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-17 07:50:44.961245 | orchestrator |     },
2026-04-17 07:50:44.961256 | orchestrator |     "rgw": {
2026-04-17 07:50:44.961267 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-17 07:50:44.961278 | orchestrator |     },
2026-04-17 07:50:44.961288 | orchestrator |     "overall": {
2026-04-17 07:50:44.961300 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-04-17 07:50:44.961311 | orchestrator |     }
2026-04-17 07:50:44.961322 | orchestrator | }
2026-04-17 07:50:45.012357 | orchestrator |
2026-04-17 07:50:45.012457 | orchestrator | # Ceph OSD tree
2026-04-17 07:50:45.012480 | orchestrator |
2026-04-17 07:50:45.012554 | orchestrator | + echo
2026-04-17 07:50:45.012578 | orchestrator | + echo '# Ceph OSD tree'
2026-04-17 07:50:45.012599 | orchestrator | + echo
2026-04-17 07:50:45.012619 | orchestrator | + ceph osd df tree
2026-04-17 07:50:45.522468 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-04-17 07:50:45.522615 | orchestrator | -1 0.11691 - 120 GiB 7.9 GiB 7.6 GiB 44 KiB 325 MiB 112 GiB 6.63 1.00 - root default
2026-04-17 07:50:45.522632 | orchestrator | -5 0.03897 - 40 GiB 2.7 GiB 2.5 GiB 14 KiB 112 MiB 37 GiB 6.64 1.00 - host testbed-node-3
2026-04-17 07:50:45.522644 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 6 KiB 54 MiB 19 GiB 6.42 0.97 190 up osd.0
2026-04-17 07:50:45.522655 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 8 KiB 58 MiB 19 GiB 6.86 1.03 202 up osd.4
2026-04-17 07:50:45.522666 | orchestrator | -3 0.03897 - 40 GiB 2.6 GiB 2.5 GiB 15 KiB 96 MiB 37 GiB 6.60 1.00 - host testbed-node-4
2026-04-17 07:50:45.522677 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 8 KiB 46 MiB 19 GiB 6.16 0.93 195 up osd.2
2026-04-17 07:50:45.522688 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 7 KiB 50 MiB 19 GiB 7.04 1.06 195 up osd.5
2026-04-17 07:50:45.522698 | orchestrator | -7 0.03897 - 40 GiB 2.7 GiB 2.5 GiB 15 KiB 116 MiB 37 GiB 6.65 1.00 - host testbed-node-5
2026-04-17 07:50:45.522709 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 8 KiB 58 MiB 19 GiB 6.69 1.01 184 up osd.1
2026-04-17 07:50:45.522720 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 7 KiB 58 MiB 19 GiB 6.61 1.00 204 up osd.3
2026-04-17 07:50:45.522731 | orchestrator | TOTAL 120 GiB 7.9 GiB 7.6 GiB 48 KiB 325 MiB 112 GiB 6.63
2026-04-17 07:50:45.522743 | orchestrator | MIN/MAX VAR: 0.93/1.06 STDDEV: 0.29
2026-04-17 07:50:45.569720 | orchestrator |
2026-04-17 07:50:45.569830 | orchestrator | # Ceph monitor status
2026-04-17 07:50:45.569854 | orchestrator |
2026-04-17 07:50:45.569874 | orchestrator | + echo
2026-04-17 07:50:45.569887 | orchestrator | + echo '# Ceph monitor status'
2026-04-17 07:50:45.569897 | orchestrator | + echo
2026-04-17 07:50:45.569908 | orchestrator | + ceph mon stat
2026-04-17 07:50:46.151095 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 34, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-04-17 07:50:46.213823 | orchestrator |
2026-04-17 07:50:46.214119 | orchestrator | # Ceph quorum status
2026-04-17 07:50:46.214141 | orchestrator |
2026-04-17 07:50:46.214153 | orchestrator | + echo
2026-04-17 07:50:46.214165 | orchestrator | + echo '# Ceph quorum status'
2026-04-17 07:50:46.214176 | orchestrator | + echo
2026-04-17 07:50:46.214200 | orchestrator | + ceph quorum_status
2026-04-17 07:50:46.218106 | orchestrator | + jq
2026-04-17 07:50:46.841045 | orchestrator | {
2026-04-17 07:50:46.841140 | orchestrator |   "election_epoch": 34,
2026-04-17 07:50:46.841156 | orchestrator |   "quorum": [
2026-04-17 07:50:46.841167 | orchestrator |     0,
2026-04-17 07:50:46.841179 | orchestrator |     1,
2026-04-17 07:50:46.841189 | orchestrator |     2
2026-04-17 07:50:46.841200 | orchestrator |   ],
2026-04-17 07:50:46.841236 | orchestrator |   "quorum_names": [
2026-04-17 07:50:46.841248 | orchestrator |     "testbed-node-0",
2026-04-17 07:50:46.841259 | orchestrator |     "testbed-node-1",
2026-04-17 07:50:46.841269 | orchestrator |     "testbed-node-2"
2026-04-17 07:50:46.841280 | orchestrator |   ],
2026-04-17 07:50:46.841291 | orchestrator |   "quorum_leader_name": "testbed-node-0",
2026-04-17 07:50:46.841302 | orchestrator |   "quorum_age": 6440,
2026-04-17 07:50:46.841313 | orchestrator |   "features": {
2026-04-17 07:50:46.841324 | orchestrator |     "quorum_con": "4540138322906710015",
2026-04-17 07:50:46.841335 | orchestrator |     "quorum_mon": [
2026-04-17 07:50:46.841345 | orchestrator |       "kraken",
2026-04-17 07:50:46.841356 | orchestrator |       "luminous",
2026-04-17 07:50:46.841367 | orchestrator |       "mimic",
2026-04-17 07:50:46.841378 | orchestrator |       "osdmap-prune",
2026-04-17 07:50:46.841388 | orchestrator |       "nautilus",
2026-04-17 07:50:46.841398 | orchestrator |       "octopus",
2026-04-17 07:50:46.841409 | orchestrator |       "pacific",
2026-04-17 07:50:46.841419 | orchestrator |       "elector-pinging",
2026-04-17 07:50:46.841430 | orchestrator |       "quincy",
2026-04-17 07:50:46.841440 | orchestrator |       "reef"
2026-04-17 07:50:46.841451 | orchestrator |     ]
2026-04-17 07:50:46.841462 | orchestrator |   },
2026-04-17 07:50:46.841472 | orchestrator |   "monmap": {
2026-04-17 07:50:46.841483 | orchestrator |     "epoch": 1,
2026-04-17 07:50:46.841494 | orchestrator |     "fsid": "11111111-1111-1111-1111-111111111111",
2026-04-17 07:50:46.841541 | orchestrator |     "modified": "2026-04-17T03:51:24.672818Z",
2026-04-17 07:50:46.841552 | orchestrator |     "created": "2026-04-17T03:51:24.672818Z",
2026-04-17 07:50:46.841563 | orchestrator |     "min_mon_release": 18,
2026-04-17 07:50:46.841573 | orchestrator |     "min_mon_release_name": "reef",
2026-04-17 07:50:46.841584 | orchestrator |     "election_strategy": 1,
2026-04-17 07:50:46.841595 | orchestrator |     "disallowed_leaders: ": "",
2026-04-17 07:50:46.841605 | orchestrator |     "stretch_mode": false,
2026-04-17 07:50:46.841616 | orchestrator |     "tiebreaker_mon": "",
2026-04-17 07:50:46.841626 | orchestrator |     "removed_ranks: ": "",
2026-04-17 07:50:46.841637 | orchestrator |     "features": {
2026-04-17 07:50:46.841648 | orchestrator |       "persistent": [
2026-04-17 07:50:46.841658 | orchestrator |         "kraken",
2026-04-17 07:50:46.841668 | orchestrator |         "luminous",
2026-04-17 07:50:46.841679 | orchestrator |         "mimic",
2026-04-17 07:50:46.841689 | orchestrator |         "osdmap-prune",
2026-04-17 07:50:46.841700 | orchestrator |         "nautilus",
2026-04-17 07:50:46.841710 | orchestrator |         "octopus",
2026-04-17 07:50:46.841721 | orchestrator |         "pacific",
2026-04-17 07:50:46.841731 | orchestrator |         "elector-pinging",
2026-04-17 07:50:46.841742 | orchestrator |         "quincy",
2026-04-17 07:50:46.841754 | orchestrator |         "reef"
2026-04-17 07:50:46.841764 | orchestrator |       ],
2026-04-17 07:50:46.841775 | orchestrator |       "optional": []
2026-04-17 07:50:46.841786 | orchestrator |     },
2026-04-17 07:50:46.841796 | orchestrator |     "mons": [
2026-04-17 07:50:46.841807 | orchestrator |       {
2026-04-17 07:50:46.841818 | orchestrator |         "rank": 0,
2026-04-17 07:50:46.841828 | orchestrator |         "name": "testbed-node-0",
2026-04-17 07:50:46.841839 | orchestrator |         "public_addrs": {
2026-04-17 07:50:46.841849 | orchestrator |           "addrvec": [
2026-04-17 07:50:46.841860 | orchestrator |             {
2026-04-17 07:50:46.841870 | orchestrator |               "type": "v2",
2026-04-17 07:50:46.841881 | orchestrator |               "addr": "192.168.16.10:3300",
2026-04-17 07:50:46.841892 | orchestrator |               "nonce": 0
2026-04-17 07:50:46.841902 | orchestrator |             },
2026-04-17 07:50:46.841913 | orchestrator |             {
2026-04-17 07:50:46.841923 | orchestrator |               "type": "v1",
2026-04-17 07:50:46.841934 | orchestrator |               "addr": "192.168.16.10:6789",
2026-04-17 07:50:46.841944 | orchestrator |               "nonce": 0
2026-04-17 07:50:46.841955 | orchestrator |             }
2026-04-17 07:50:46.841965 | orchestrator |           ]
2026-04-17 07:50:46.841976 | orchestrator |         },
2026-04-17 07:50:46.841987 | orchestrator |         "addr": "192.168.16.10:6789/0",
2026-04-17 07:50:46.841997 | orchestrator |         "public_addr": "192.168.16.10:6789/0",
2026-04-17 07:50:46.842008 | orchestrator |         "priority": 0,
2026-04-17 07:50:46.842068 | orchestrator |         "weight": 0,
2026-04-17 07:50:46.842080 | orchestrator |         "crush_location": "{}"
2026-04-17 07:50:46.842091 | orchestrator |       },
2026-04-17 07:50:46.842102 | orchestrator |       {
2026-04-17 07:50:46.842112 | orchestrator |         "rank": 1,
2026-04-17 07:50:46.842123 | orchestrator |         "name": "testbed-node-1",
2026-04-17 07:50:46.842135 | orchestrator |         "public_addrs": {
2026-04-17 07:50:46.842145 | orchestrator |           "addrvec": [
2026-04-17 07:50:46.842160 | orchestrator |             {
2026-04-17 07:50:46.842178 | orchestrator |               "type": "v2",
2026-04-17 07:50:46.842207 | orchestrator |               "addr": "192.168.16.11:3300",
2026-04-17 07:50:46.842231 | orchestrator |               "nonce": 0
2026-04-17 07:50:46.842250 | orchestrator |             },
2026-04-17 07:50:46.842269 | orchestrator |             {
2026-04-17 07:50:46.842285 | orchestrator |               "type": "v1",
2026-04-17 07:50:46.842296 | orchestrator |               "addr": "192.168.16.11:6789",
2026-04-17 07:50:46.842306 | orchestrator |               "nonce": 0
2026-04-17 07:50:46.842317 | orchestrator |             }
2026-04-17 07:50:46.842328 | orchestrator |           ]
2026-04-17 07:50:46.842338 | orchestrator |         },
2026-04-17 07:50:46.842349 | orchestrator |         "addr": "192.168.16.11:6789/0",
2026-04-17 07:50:46.842359 | orchestrator |         "public_addr": "192.168.16.11:6789/0",
2026-04-17 07:50:46.842370 | orchestrator |         "priority": 0,
2026-04-17 07:50:46.842381 | orchestrator |         "weight": 0,
2026-04-17 07:50:46.842391 | orchestrator |         "crush_location": "{}"
2026-04-17 07:50:46.842402 | orchestrator |       },
2026-04-17 07:50:46.842412 | orchestrator |       {
2026-04-17 07:50:46.842423 | orchestrator |         "rank": 2,
2026-04-17 07:50:46.842434 | orchestrator |         "name": "testbed-node-2",
2026-04-17 07:50:46.842444 | orchestrator |         "public_addrs": {
2026-04-17 07:50:46.842455 | orchestrator |           "addrvec": [
2026-04-17 07:50:46.842465 | orchestrator |             {
2026-04-17 07:50:46.842476 | orchestrator |               "type": "v2",
2026-04-17 07:50:46.842486 | orchestrator |               "addr": "192.168.16.12:3300",
2026-04-17 07:50:46.842497 | orchestrator |               "nonce": 0
2026-04-17 07:50:46.842530 | orchestrator |             },
2026-04-17 07:50:46.842541 | orchestrator |             {
2026-04-17 07:50:46.842552 | orchestrator |               "type": "v1",
2026-04-17 07:50:46.842562 | orchestrator |               "addr": "192.168.16.12:6789",
2026-04-17 07:50:46.842573 | orchestrator |               "nonce": 0
2026-04-17 07:50:46.842584 | orchestrator |             }
2026-04-17 07:50:46.842594 | orchestrator |           ]
2026-04-17 07:50:46.842605 | orchestrator |         },
2026-04-17 07:50:46.842615 | orchestrator |         "addr": "192.168.16.12:6789/0",
2026-04-17 07:50:46.842626 | orchestrator |         "public_addr": "192.168.16.12:6789/0",
2026-04-17 07:50:46.842637 | orchestrator |         "priority": 0,
2026-04-17 07:50:46.842647 | orchestrator |         "weight": 0,
2026-04-17 07:50:46.842658 | orchestrator |         "crush_location": "{}"
2026-04-17 07:50:46.842668 | orchestrator |       }
2026-04-17 07:50:46.842679 | orchestrator |     ]
2026-04-17 07:50:46.842689 | orchestrator |   }
2026-04-17 07:50:46.842700 | orchestrator | }
2026-04-17 07:50:46.842711 | orchestrator |
2026-04-17 07:50:46.842722 | orchestrator | # Ceph free space status
2026-04-17 07:50:46.842733 | orchestrator |
2026-04-17 07:50:46.842743 | orchestrator | + echo
2026-04-17 07:50:46.842754 | orchestrator | + echo '# Ceph free space status'
2026-04-17 07:50:46.842765 | orchestrator | + echo
2026-04-17 07:50:46.842776 | orchestrator | + ceph df
2026-04-17 07:50:47.421092 | orchestrator | --- RAW STORAGE ---
2026-04-17 07:50:47.421171 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-04-17 07:50:47.421192 | orchestrator | hdd 120 GiB 112 GiB 7.9 GiB 7.9 GiB 6.63
2026-04-17 07:50:47.421199 | orchestrator | TOTAL 120 GiB 112 GiB 7.9 GiB 7.9 GiB 6.63
2026-04-17 07:50:47.421206 | orchestrator |
2026-04-17 07:50:47.421213 | orchestrator | --- POOLS ---
2026-04-17 07:50:47.421220 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-04-17 07:50:47.421228 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2026-04-17 07:50:47.421235 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-04-17 07:50:47.421242 | orchestrator | cephfs_metadata 3 16 10 KiB 22 117 KiB 0 35 GiB
2026-04-17 07:50:47.421248 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-04-17 07:50:47.421255 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-04-17 07:50:47.421261 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-04-17 07:50:47.421268 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-04-17 07:50:47.421284 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-04-17 07:50:47.421291 | orchestrator | .rgw.root 9 32 2.6 KiB 6 48 KiB 0 53 GiB
2026-04-17 07:50:47.421298 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-04-17 07:50:47.421304 | orchestrator | volumes 11 32 325 MiB 267 974 MiB 0.89 35 GiB
2026-04-17 07:50:47.421327 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB
2026-04-17 07:50:47.421333 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-04-17 07:50:47.421340 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-04-17 07:50:47.479797 | orchestrator | ++ semver 10.0.0 5.0.0
2026-04-17 07:50:47.536548 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-17 07:50:47.536620 | orchestrator | + osism apply facts
2026-04-17 07:50:48.845194 | orchestrator | 2026-04-17 07:50:48 | INFO  | Prepare task for execution of facts.
2026-04-17 07:50:48.912688 | orchestrator | 2026-04-17 07:50:48 | INFO  | Task 4de7e2f9-f38e-4bdd-b5f7-00a48b444ae9 (facts) was prepared for execution.
2026-04-17 07:50:48.912802 | orchestrator | 2026-04-17 07:50:48 | INFO  | It takes a moment until task 4de7e2f9-f38e-4bdd-b5f7-00a48b444ae9 (facts) has been started and output is visible here.
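The check script above inspects `ceph quorum_status | jq` output as free text; the same pass/fail decision can be scripted. A minimal sketch in Python, run against a sample abridged from the quorum output in this log (the `check_quorum` helper and the `expected_mons=3` default are illustrative assumptions, not part of the testbed scripts; in a real check the JSON would come from invoking `ceph quorum_status`):

```python
import json

# Sample abridged from the `ceph quorum_status` output above; in practice this
# string would be the stdout of `ceph quorum_status` (which emits JSON).
sample = """
{
  "election_epoch": 34,
  "quorum": [0, 1, 2],
  "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
  "quorum_leader_name": "testbed-node-0",
  "monmap": {"min_mon_release_name": "reef"}
}
"""

def check_quorum(raw: str, expected_mons: int = 3) -> str:
    """Return the quorum leader, raising if quorum is incomplete."""
    status = json.loads(raw)
    in_quorum = len(status["quorum"])
    if in_quorum != expected_mons:
        raise RuntimeError(f"only {in_quorum}/{expected_mons} mons in quorum")
    return status["quorum_leader_name"]

print(check_quorum(sample))  # testbed-node-0
```

With all three monitors in quorum (ranks 0, 1, 2, as in the log), the check passes and reports `testbed-node-0` as leader; a partial quorum would raise instead of silently continuing.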
2026-04-17 07:51:07.337195 | orchestrator |
2026-04-17 07:51:07.337285 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-17 07:51:07.337298 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-17 07:51:07.337308 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-17 07:51:07.337326 | orchestrator |
2026-04-17 07:51:07.337334 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-17 07:51:07.337342 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-17 07:51:07.337350 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-17 07:51:07.337367 | orchestrator | Friday 17 April 2026 07:50:53 +0000 (0:00:01.602) 0:00:01.602 **********
2026-04-17 07:51:07.337375 | orchestrator | ok: [testbed-manager]
2026-04-17 07:51:07.337384 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:51:07.337392 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:51:07.337400 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:51:07.337408 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:51:07.337416 | orchestrator | ok: [testbed-node-4]
2026-04-17 07:51:07.337423 | orchestrator | ok: [testbed-node-5]
2026-04-17 07:51:07.337431 | orchestrator |
2026-04-17 07:51:07.337439 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-17 07:51:07.337448 | orchestrator | Friday 17 April 2026 07:50:55 +0000 (0:00:01.837) 0:00:03.440 **********
2026-04-17 07:51:07.337455 | orchestrator | skipping: [testbed-manager]
2026-04-17 07:51:07.337463 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:51:07.337471 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:51:07.337479 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:51:07.337487 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:51:07.337495 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:51:07.337503 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:51:07.337510 | orchestrator |
2026-04-17 07:51:07.337518 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-17 07:51:07.337568 | orchestrator |
2026-04-17 07:51:07.337576 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-17 07:51:07.337584 | orchestrator | Friday 17 April 2026 07:50:58 +0000 (0:00:02.190) 0:00:05.630 **********
2026-04-17 07:51:07.337591 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:51:07.337599 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:51:07.337607 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:51:07.337615 | orchestrator | ok: [testbed-manager]
2026-04-17 07:51:07.337623 | orchestrator | ok: [testbed-node-4]
2026-04-17 07:51:07.337631 | orchestrator | ok: [testbed-node-5]
2026-04-17 07:51:07.337638 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:51:07.337646 | orchestrator |
2026-04-17 07:51:07.337654 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-17 07:51:07.337685 | orchestrator |
2026-04-17 07:51:07.337693 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-17 07:51:07.337701 | orchestrator | Friday 17 April 2026 07:51:05 +0000 (0:00:07.209) 0:00:12.840 **********
2026-04-17 07:51:07.337709 | orchestrator | skipping: [testbed-manager]
2026-04-17 07:51:07.337717 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:51:07.337725 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:51:07.337732 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:51:07.337740 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:51:07.337748 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:51:07.337757 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:51:07.337765 | orchestrator |
2026-04-17 07:51:07.337774 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:51:07.337783 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 07:51:07.337793 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 07:51:07.337803 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 07:51:07.337812 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 07:51:07.337821 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 07:51:07.337830 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 07:51:07.337839 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 07:51:07.337847 | orchestrator |
2026-04-17 07:51:07.337855 | orchestrator |
2026-04-17 07:51:07.337863 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:51:07.337871 | orchestrator | Friday 17 April 2026 07:51:06 +0000 (0:00:01.696) 0:00:14.537 **********
2026-04-17 07:51:07.337879 | orchestrator | ===============================================================================
2026-04-17 07:51:07.337887 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.21s
2026-04-17 07:51:07.337895 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.19s
2026-04-17 07:51:07.337902 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.84s
2026-04-17 07:51:07.337910 | orchestrator | Gather facts for all hosts ---------------------------------------------- 1.70s
2026-04-17 07:51:07.537846 | orchestrator | + osism validate ceph-mons
2026-04-17 07:52:18.486934 | orchestrator |
2026-04-17 07:52:18.487052 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-04-17 07:52:18.487072 | orchestrator |
2026-04-17 07:52:18.487086 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-17 07:52:18.487099 | orchestrator | Friday 17 April 2026 07:51:24 +0000 (0:00:01.981) 0:00:01.981 **********
2026-04-17 07:52:18.487113 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:52:18.487126 | orchestrator |
2026-04-17 07:52:18.487139 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-17 07:52:18.487152 | orchestrator | Friday 17 April 2026 07:51:27 +0000 (0:00:02.899) 0:00:04.881 **********
2026-04-17 07:52:18.487164 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:52:18.487176 | orchestrator |
2026-04-17 07:52:18.487189 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-17 07:52:18.487202 | orchestrator | Friday 17 April 2026 07:51:29 +0000 (0:00:01.782) 0:00:06.663 **********
2026-04-17 07:52:18.487241 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.487256 | orchestrator |
2026-04-17 07:52:18.487269 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-17 07:52:18.487281 | orchestrator | Friday 17 April 2026 07:51:30 +0000 (0:00:01.140) 0:00:07.803 **********
2026-04-17 07:52:18.487293 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.487306 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:52:18.487317 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:52:18.487330 | orchestrator |
2026-04-17 07:52:18.487342 | orchestrator | TASK [Get container info] ******************************************************
2026-04-17 07:52:18.487354 | orchestrator | Friday 17 April 2026 07:51:32 +0000 (0:00:01.735) 0:00:09.539 **********
2026-04-17 07:52:18.487367 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.487379 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:52:18.487392 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:52:18.487404 | orchestrator |
2026-04-17 07:52:18.487416 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-17 07:52:18.487428 | orchestrator | Friday 17 April 2026 07:51:34 +0000 (0:00:02.602) 0:00:12.142 **********
2026-04-17 07:52:18.487459 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.487474 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:52:18.487487 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:52:18.487501 | orchestrator |
2026-04-17 07:52:18.487514 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-17 07:52:18.487527 | orchestrator | Friday 17 April 2026 07:51:36 +0000 (0:00:01.451) 0:00:13.593 **********
2026-04-17 07:52:18.487540 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.487554 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:52:18.487567 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:52:18.487580 | orchestrator |
2026-04-17 07:52:18.487631 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-17 07:52:18.487643 | orchestrator | Friday 17 April 2026 07:51:37 +0000 (0:00:01.357) 0:00:14.951 **********
2026-04-17 07:52:18.487654 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.487667 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:52:18.487679 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:52:18.487692 | orchestrator |
2026-04-17 07:52:18.487704 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-04-17 07:52:18.487715 | orchestrator | Friday 17 April 2026 07:51:38 +0000 (0:00:01.401) 0:00:16.352 **********
2026-04-17 07:52:18.487727 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.487739 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:52:18.487750 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:52:18.487762 | orchestrator |
2026-04-17 07:52:18.487773 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-04-17 07:52:18.487785 | orchestrator | Friday 17 April 2026 07:51:40 +0000 (0:00:01.344) 0:00:17.697 **********
2026-04-17 07:52:18.487796 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.487807 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:52:18.487819 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:52:18.487830 | orchestrator |
2026-04-17 07:52:18.487840 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-17 07:52:18.487851 | orchestrator | Friday 17 April 2026 07:51:41 +0000 (0:00:01.381) 0:00:19.078 **********
2026-04-17 07:52:18.487862 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.487874 | orchestrator |
2026-04-17 07:52:18.487886 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-17 07:52:18.487896 | orchestrator | Friday 17 April 2026 07:51:43 +0000 (0:00:01.278) 0:00:20.356 **********
2026-04-17 07:52:18.487907 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.487919 | orchestrator |
2026-04-17 07:52:18.487930 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-17 07:52:18.487950 | orchestrator | Friday 17 April 2026 07:51:44 +0000 (0:00:01.259) 0:00:21.616 **********
2026-04-17 07:52:18.487975 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.487983 | orchestrator |
2026-04-17 07:52:18.487990 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:52:18.487997 | orchestrator | Friday 17 April 2026 07:51:45 +0000 (0:00:01.259) 0:00:22.876 **********
2026-04-17 07:52:18.488004 | orchestrator |
2026-04-17 07:52:18.488010 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:52:18.488017 | orchestrator | Friday 17 April 2026 07:51:45 +0000 (0:00:00.468) 0:00:23.344 **********
2026-04-17 07:52:18.488023 | orchestrator |
2026-04-17 07:52:18.488030 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:52:18.488037 | orchestrator | Friday 17 April 2026 07:51:46 +0000 (0:00:00.654) 0:00:23.998 **********
2026-04-17 07:52:18.488044 | orchestrator |
2026-04-17 07:52:18.488055 | orchestrator | TASK [Print report file information] *******************************************
2026-04-17 07:52:18.488066 | orchestrator | Friday 17 April 2026 07:51:47 +0000 (0:00:00.871) 0:00:24.869 **********
2026-04-17 07:52:18.488077 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.488088 | orchestrator |
2026-04-17 07:52:18.488099 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-17 07:52:18.488110 | orchestrator | Friday 17 April 2026 07:51:48 +0000 (0:00:01.272) 0:00:26.142 **********
2026-04-17 07:52:18.488120 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.488130 | orchestrator |
2026-04-17 07:52:18.488164 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-04-17 07:52:18.488177 | orchestrator | Friday 17 April 2026 07:51:50 +0000 (0:00:01.266) 0:00:27.408 **********
2026-04-17 07:52:18.488188 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.488198 | orchestrator |
2026-04-17 07:52:18.488209 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-04-17 07:52:18.488221 | orchestrator | Friday 17 April 2026 07:51:51 +0000 (0:00:01.137) 0:00:28.546 **********
2026-04-17 07:52:18.488232 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:52:18.488244 | orchestrator |
2026-04-17 07:52:18.488255 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-04-17 07:52:18.488266 | orchestrator | Friday 17 April 2026 07:51:53 +0000 (0:00:02.682) 0:00:31.228 **********
2026-04-17 07:52:18.488278 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.488290 | orchestrator |
2026-04-17 07:52:18.488301 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-04-17 07:52:18.488312 | orchestrator | Friday 17 April 2026 07:51:55 +0000 (0:00:01.406) 0:00:32.635 **********
2026-04-17 07:52:18.488324 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.488337 | orchestrator |
2026-04-17 07:52:18.488348 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-04-17 07:52:18.488360 | orchestrator | Friday 17 April 2026 07:51:56 +0000 (0:00:01.163) 0:00:33.799 **********
2026-04-17 07:52:18.488371 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.488383 | orchestrator |
2026-04-17 07:52:18.488394 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-04-17 07:52:18.488405 | orchestrator | Friday 17 April 2026 07:51:57 +0000 (0:00:01.370) 0:00:35.170 **********
2026-04-17 07:52:18.488416 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.488427 | orchestrator |
2026-04-17 07:52:18.488438 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-04-17 07:52:18.488450 | orchestrator | Friday 17 April 2026 07:51:59 +0000 (0:00:01.376) 0:00:36.547 **********
2026-04-17 07:52:18.488462 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.488474 | orchestrator |
2026-04-17 07:52:18.488486 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-04-17 07:52:18.488497 | orchestrator | Friday 17 April 2026 07:52:00 +0000 (0:00:01.085) 0:00:37.633 **********
2026-04-17 07:52:18.488508 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.488519 | orchestrator |
2026-04-17 07:52:18.488530 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-04-17 07:52:18.488550 | orchestrator | Friday 17 April 2026 07:52:01 +0000 (0:00:01.121) 0:00:38.754 **********
2026-04-17 07:52:18.488562 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.488573 | orchestrator |
2026-04-17 07:52:18.488645 | orchestrator | TASK [Gather status data] ******************************************************
2026-04-17 07:52:18.488661 | orchestrator | Friday 17 April 2026 07:52:02 +0000 (0:00:01.111) 0:00:39.866 **********
2026-04-17 07:52:18.488673 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:52:18.488685 | orchestrator |
2026-04-17 07:52:18.488696 | orchestrator | TASK [Set health test data] ****************************************************
2026-04-17 07:52:18.488707 | orchestrator | Friday 17 April 2026 07:52:04 +0000 (0:00:02.324) 0:00:42.190 **********
2026-04-17 07:52:18.488718 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.488729 | orchestrator |
2026-04-17 07:52:18.488741 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-04-17 07:52:18.488753 | orchestrator | Friday 17 April 2026 07:52:06 +0000 (0:00:01.288) 0:00:43.478 **********
2026-04-17 07:52:18.488764 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.488775 | orchestrator |
2026-04-17 07:52:18.488786 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-04-17 07:52:18.488797 | orchestrator | Friday 17 April 2026 07:52:07 +0000 (0:00:01.224) 0:00:44.702 **********
2026-04-17 07:52:18.488809 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:52:18.488820 | orchestrator |
2026-04-17 07:52:18.488831 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-04-17 07:52:18.488843 | orchestrator | Friday 17 April 2026 07:52:08 +0000 (0:00:01.175) 0:00:45.877 **********
2026-04-17 07:52:18.488855 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.488865 | orchestrator |
2026-04-17 07:52:18.488876 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-04-17 07:52:18.488888 | orchestrator | Friday 17 April 2026 07:52:09 +0000 (0:00:01.123) 0:00:47.001 **********
2026-04-17 07:52:18.488899 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.488908 | orchestrator |
2026-04-17 07:52:18.488918 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-17 07:52:18.488930 | orchestrator | Friday 17 April 2026 07:52:10 +0000 (0:00:01.156) 0:00:48.158 **********
2026-04-17 07:52:18.488949 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:52:18.488960 | orchestrator |
2026-04-17 07:52:18.488971 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-17 07:52:18.488982 | orchestrator | Friday 17 April 2026 07:52:12 +0000 (0:00:01.265) 0:00:49.423 **********
2026-04-17 07:52:18.488994 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:52:18.489004 | orchestrator |
2026-04-17 07:52:18.489016 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-17 07:52:18.489028 | orchestrator | Friday 17 April 2026 07:52:13 +0000 (0:00:01.243) 0:00:50.667 **********
2026-04-17 07:52:18.489039 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:52:18.489050 | orchestrator |
2026-04-17 07:52:18.489062 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-17 07:52:18.489073 | orchestrator | Friday 17 April 2026 07:52:16 +0000 (0:00:02.917) 0:00:53.584 **********
2026-04-17 07:52:18.489084 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:52:18.489151 | orchestrator |
2026-04-17 07:52:18.489162 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-17 07:52:18.489175 | orchestrator | Friday 17 April 2026 07:52:17 +0000 (0:00:01.507) 0:00:55.091 **********
2026-04-17 07:52:18.489187 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:52:18.489199 | orchestrator |
2026-04-17 07:52:18.489221 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:52:25.762317 | orchestrator | Friday 17 April 2026 07:52:19 +0000 (0:00:01.507) 0:00:56.599 **********
2026-04-17 07:52:25.762419 | orchestrator |
2026-04-17 07:52:25.762434 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:52:25.762469 | orchestrator | Friday 17 April 2026 07:52:19 +0000 (0:00:00.435) 0:00:57.035 **********
2026-04-17 07:52:25.762480 | orchestrator |
2026-04-17 07:52:25.762490 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:52:25.762499 | orchestrator | Friday 17 April 2026 07:52:20 +0000 (0:00:00.453) 0:00:57.488 **********
2026-04-17 07:52:25.762509 | orchestrator |
2026-04-17 07:52:25.762518 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-17 07:52:25.762528 | orchestrator | Friday 17 April 2026 07:52:20 +0000 (0:00:00.817) 0:00:58.306 **********
2026-04-17 07:52:25.762538 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:52:25.762547 | orchestrator |
2026-04-17 07:52:25.762557 | orchestrator | TASK [Print report file information] *******************************************
2026-04-17 07:52:25.762567 | orchestrator | Friday 17 April 2026 07:52:23 +0000 (0:00:02.322) 0:01:00.629 **********
2026-04-17 07:52:25.762576 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-17 07:52:25.762586 | orchestrator |  "msg": [
2026-04-17 07:52:25.762673 | orchestrator |  "Validator run completed.",
2026-04-17 07:52:25.762685 | orchestrator |  "You can find the report file here:",
2026-04-17 07:52:25.762696 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-17T07:51:25+00:00-report.json",
2026-04-17 07:52:25.762707 | orchestrator |  "on the following host:",
2026-04-17 07:52:25.762717 | orchestrator |  "testbed-manager"
2026-04-17 07:52:25.762740 | orchestrator |  ]
2026-04-17 07:52:25.762750 | orchestrator | }
2026-04-17 07:52:25.762760 | orchestrator |
2026-04-17 07:52:25.762770 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:52:25.762781 | orchestrator | testbed-node-0 : ok=24  changed=4  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-17 07:52:25.762792 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 07:52:25.762803 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 07:52:25.762812 | orchestrator |
2026-04-17 07:52:25.762822 | orchestrator |
2026-04-17 07:52:25.762832 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:52:25.762841 | orchestrator | Friday 17 April 2026 07:52:25 +0000 (0:00:02.017) 0:01:02.646 **********
2026-04-17 07:52:25.762851 | orchestrator | ===============================================================================
2026-04-17 07:52:25.762860 | orchestrator | Aggregate test results step one ----------------------------------------- 2.92s
2026-04-17 07:52:25.762871 | orchestrator | Get timestamp for report file ------------------------------------------- 2.90s
2026-04-17 07:52:25.762883 | orchestrator | Get monmap info from one mon container ---------------------------------- 2.68s
2026-04-17 07:52:25.762894 | orchestrator | Get container info ------------------------------------------------------ 2.60s
2026-04-17 07:52:25.762905 | orchestrator | Gather status data ------------------------------------------------------ 2.32s
2026-04-17 07:52:25.762916 | orchestrator | Write report file ------------------------------------------------------- 2.32s
2026-04-17 07:52:25.762927 | orchestrator | Print report file information ------------------------------------------- 2.02s
2026-04-17 07:52:25.762938 | orchestrator | Flush handlers ---------------------------------------------------------- 1.99s
2026-04-17 07:52:25.762949 | orchestrator | Create report output directory ------------------------------------------ 1.78s
2026-04-17 07:52:25.762960 | orchestrator | Prepare test data for container existance test -------------------------- 1.74s
2026-04-17 07:52:25.762971 | orchestrator | Flush handlers ---------------------------------------------------------- 1.71s
2026-04-17 07:52:25.762982 | orchestrator | Aggregate test results step three --------------------------------------- 1.51s
2026-04-17 07:52:25.762994 | orchestrator | Aggregate test results step two ----------------------------------------- 1.51s
2026-04-17 07:52:25.763026 | orchestrator | Set test result to failed if container is missing ----------------------- 1.45s
2026-04-17 07:52:25.763037 | orchestrator | Set quorum test data ---------------------------------------------------- 1.41s
2026-04-17 07:52:25.763049 | orchestrator | Prepare test data ------------------------------------------------------- 1.40s
2026-04-17 07:52:25.763060 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 1.38s
2026-04-17 07:52:25.763070 | orchestrator | Set fsid test vars ------------------------------------------------------ 1.38s
2026-04-17 07:52:25.763082 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 1.37s
2026-04-17 07:52:25.763093 | orchestrator | Set test result to passed if container is existing ---------------------- 1.36s
2026-04-17 07:52:25.992353 | orchestrator | + osism validate ceph-mgrs
2026-04-17 07:53:30.070341 | orchestrator |
2026-04-17 07:53:30.070459 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-04-17 07:53:30.070476 | orchestrator |
2026-04-17 07:53:30.070488 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-17 07:53:30.070499 | orchestrator | Friday 17 April 2026 07:52:43 +0000 (0:00:02.225) 0:00:02.225 **********
2026-04-17 07:53:30.070512 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:53:30.070523 | orchestrator |
2026-04-17 07:53:30.070534 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-17 07:53:30.070546 | orchestrator | Friday 17 April 2026 07:52:45 +0000 (0:00:02.705) 0:00:04.931 **********
2026-04-17 07:53:30.070557 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:53:30.070568 | orchestrator |
2026-04-17 07:53:30.070580 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-17 07:53:30.070591 | orchestrator | Friday 17 April 2026 07:52:47 +0000 (0:00:01.591) 0:00:06.522 **********
2026-04-17 07:53:30.070602 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:53:30.070676 | orchestrator |
2026-04-17 07:53:30.070692 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-17 07:53:30.070703 | orchestrator | Friday 17 April 2026 07:52:48 +0000 (0:00:01.106) 0:00:07.629 **********
2026-04-17 07:53:30.070714 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:53:30.070725 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:53:30.070736 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:53:30.070747 | orchestrator |
2026-04-17 07:53:30.070758 | orchestrator | TASK [Get container info] ******************************************************
2026-04-17 07:53:30.070769 | orchestrator | Friday 17 April 2026 07:52:50 +0000 (0:00:01.813) 0:00:09.442 **********
2026-04-17 07:53:30.070780 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:53:30.070791 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:53:30.070801 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:53:30.070812 | orchestrator |
2026-04-17 07:53:30.070823 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-17 07:53:30.070834 | orchestrator | Friday 17 April 2026 07:52:53 +0000 (0:00:02.714) 0:00:12.156 **********
2026-04-17 07:53:30.070845 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:53:30.070856 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:53:30.070867 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:53:30.070878 | orchestrator |
2026-04-17 07:53:30.070889 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-17 07:53:30.070899 | orchestrator | Friday 17 April 2026 07:52:54 +0000 (0:00:01.310) 0:00:13.467 **********
2026-04-17 07:53:30.070910 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:53:30.070921 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:53:30.070932 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:53:30.070943 | orchestrator |
2026-04-17 07:53:30.070954 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-17 07:53:30.070965 | orchestrator | Friday 17 April 2026 07:52:55 +0000 (0:00:01.353) 0:00:14.925 **********
2026-04-17 07:53:30.070976 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:53:30.071012 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:53:30.071023 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:53:30.071034 | orchestrator |
2026-04-17 07:53:30.071045 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-04-17 07:53:30.071056 | orchestrator | Friday 17 April 2026 07:52:57 +0000 (0:00:01.353) 0:00:16.279 **********
2026-04-17 07:53:30.071066 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:53:30.071078 | orchestrator | skipping: [testbed-node-1]
2026-04-17 07:53:30.071089 | orchestrator | skipping: [testbed-node-2]
2026-04-17 07:53:30.071099 | orchestrator |
2026-04-17 07:53:30.071110 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-04-17 07:53:30.071121 | orchestrator | Friday 17 April 2026 07:52:58 +0000 (0:00:01.474) 0:00:17.753 **********
2026-04-17 07:53:30.071132 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:53:30.071143 | orchestrator | ok: [testbed-node-1]
2026-04-17 07:53:30.071154 | orchestrator | ok: [testbed-node-2]
2026-04-17 07:53:30.071164 | orchestrator |
2026-04-17 07:53:30.071175 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-17 07:53:30.071186 | orchestrator | Friday 17 April 2026 07:53:00 +0000 (0:00:01.349) 0:00:19.102 **********
2026-04-17 07:53:30.071197 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:53:30.071208 | orchestrator |
2026-04-17 07:53:30.071219 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-17 07:53:30.071230 | orchestrator | Friday 17 April 2026 07:53:01 +0000 (0:00:01.327) 0:00:20.430 **********
2026-04-17 07:53:30.071240 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:53:30.071251 | orchestrator |
2026-04-17 07:53:30.071262 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-17 07:53:30.071273 | orchestrator | Friday 17 April 2026 07:53:02 +0000 (0:00:01.266) 0:00:21.697 **********
2026-04-17 07:53:30.071283 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:53:30.071294 | orchestrator |
2026-04-17 07:53:30.071305 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:53:30.071316 | orchestrator | Friday 17 April 2026 07:53:03 +0000 (0:00:01.281) 0:00:22.979 **********
2026-04-17 07:53:30.071327 | orchestrator |
2026-04-17 07:53:30.071338 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:53:30.071349 | orchestrator | Friday 17 April 2026 07:53:04 +0000 (0:00:00.463) 0:00:23.443 **********
2026-04-17 07:53:30.071359 | orchestrator |
2026-04-17 07:53:30.071370 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:53:30.071381 | orchestrator | Friday 17 April 2026 07:53:05 +0000 (0:00:00.667) 0:00:24.110 **********
2026-04-17 07:53:30.071392 | orchestrator |
2026-04-17 07:53:30.071403 | orchestrator | TASK [Print report file information] *******************************************
2026-04-17 07:53:30.071413 | orchestrator | Friday 17 April 2026 07:53:05 +0000 (0:00:00.823) 0:00:24.934 **********
2026-04-17 07:53:30.071424 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:53:30.071435 | orchestrator |
2026-04-17 07:53:30.071446 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-17 07:53:30.071457 | orchestrator | Friday 17 April 2026 07:53:07 +0000 (0:00:01.290) 0:00:26.224 **********
2026-04-17 07:53:30.071468 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:53:30.071478 | orchestrator |
2026-04-17 07:53:30.071507 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-17 07:53:30.071518 | orchestrator | Friday 17 April 2026 07:53:08 +0000 (0:00:01.294) 0:00:27.518 **********
2026-04-17 07:53:30.071530 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:53:30.071541 | orchestrator |
2026-04-17 07:53:30.071552 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-17 07:53:30.071562 | orchestrator | Friday 17 April 2026 07:53:09 +0000 (0:00:01.140) 0:00:28.658 **********
2026-04-17 07:53:30.071573 | orchestrator | changed: [testbed-node-0]
2026-04-17 07:53:30.071584 | orchestrator |
2026-04-17 07:53:30.071595 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-04-17 07:53:30.071614 | orchestrator | Friday 17 April 2026 07:53:12 +0000 (0:00:03.036) 0:00:31.695 **********
2026-04-17 07:53:30.071644 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:53:30.071655 | orchestrator |
2026-04-17 07:53:30.071666 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-04-17 07:53:30.071677 | orchestrator | Friday 17 April 2026 07:53:13 +0000 (0:00:01.306) 0:00:33.001 **********
2026-04-17 07:53:30.071688 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:53:30.071699 | orchestrator |
2026-04-17 07:53:30.071710 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-04-17 07:53:30.071720 | orchestrator | Friday 17 April 2026 07:53:15 +0000 (0:00:01.148) 0:00:34.338 **********
2026-04-17 07:53:30.071731 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:53:30.071742 | orchestrator |
2026-04-17 07:53:30.071753 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-04-17 07:53:30.071764 | orchestrator | Friday 17 April 2026 07:53:16 +0000 (0:00:01.125) 0:00:35.487 **********
2026-04-17 07:53:30.071775 | orchestrator | ok: [testbed-node-0]
2026-04-17 07:53:30.071786 | orchestrator |
2026-04-17 07:53:30.071796 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-17 07:53:30.071807 | orchestrator | Friday 17 April 2026 07:53:17 +0000 (0:00:01.125) 0:00:36.613 **********
2026-04-17 07:53:30.071818 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:53:30.071829 | orchestrator |
2026-04-17 07:53:30.071839 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-17 07:53:30.071850 | orchestrator | Friday 17 April 2026 07:53:19 +0000 (0:00:01.543) 0:00:38.156 **********
2026-04-17 07:53:30.071861 | orchestrator | skipping: [testbed-node-0]
2026-04-17 07:53:30.071872 | orchestrator |
2026-04-17 07:53:30.071883 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-17 07:53:30.071894 | orchestrator | Friday 17 April 2026 07:53:20 +0000 (0:00:01.512) 0:00:39.668 **********
2026-04-17 07:53:30.071905 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:53:30.071916 | orchestrator |
2026-04-17 07:53:30.071927 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-17 07:53:30.071937 | orchestrator | Friday 17 April 2026 07:53:23 +0000 (0:00:02.470) 0:00:42.138 **********
2026-04-17 07:53:30.071948 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:53:30.071959 | orchestrator |
2026-04-17 07:53:30.071970 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-17 07:53:30.071981 | orchestrator | Friday 17 April 2026 07:53:24 +0000 (0:00:01.310) 0:00:43.449 **********
2026-04-17 07:53:30.071991 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:53:30.072003 | orchestrator |
2026-04-17 07:53:30.072032 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:53:30.072044 | orchestrator | Friday 17 April 2026 07:53:25 +0000 (0:00:01.279) 0:00:44.729 **********
2026-04-17 07:53:30.072055 | orchestrator |
2026-04-17 07:53:30.072066 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:53:30.072076 | orchestrator | Friday 17 April 2026 07:53:26 +0000 (0:00:00.427) 0:00:45.156 **********
2026-04-17 07:53:30.072087 | orchestrator |
2026-04-17 07:53:30.072098 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:53:30.072109 | orchestrator | Friday 17 April 2026 07:53:26 +0000 (0:00:00.414) 0:00:45.570 **********
2026-04-17 07:53:30.072119 | orchestrator |
2026-04-17 07:53:30.072130 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-17 07:53:30.072141 | orchestrator | Friday 17 April 2026 07:53:27 +0000 (0:00:00.774) 0:00:46.345 **********
2026-04-17 07:53:30.072152 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-17 07:53:30.072162 | orchestrator |
2026-04-17 07:53:30.072173 | orchestrator | TASK [Print report file information] *******************************************
2026-04-17 07:53:30.072184 | orchestrator | Friday 17 April 2026 07:53:29 +0000 (0:00:02.335) 0:00:48.681 **********
2026-04-17 07:53:30.072201 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-17 07:53:30.072212 | orchestrator |     "msg": [
2026-04-17 07:53:30.072223 | orchestrator |         "Validator run completed.",
2026-04-17 07:53:30.072234 | orchestrator |         "You can find the report file here:",
2026-04-17 07:53:30.072245 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-04-17T07:52:44+00:00-report.json",
2026-04-17 07:53:30.072257 | orchestrator |         "on the following host:",
2026-04-17 07:53:30.072273 | orchestrator |         "testbed-manager"
2026-04-17 07:53:30.072284 | orchestrator |     ]
2026-04-17 07:53:30.072295 | orchestrator | }
2026-04-17 07:53:30.072306 | orchestrator |
2026-04-17 07:53:30.072317 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:53:30.072329 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-17 07:53:30.072341 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 07:53:30.072360 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 07:53:31.751711 | orchestrator |
2026-04-17 07:53:31.751804 | orchestrator |
2026-04-17 07:53:31.751817 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:53:31.751828 | orchestrator | Friday 17 April 2026 07:53:31 +0000 (0:00:01.753) 0:00:50.434 **********
2026-04-17 07:53:31.751837 | orchestrator | ===============================================================================
2026-04-17 07:53:31.751846 | orchestrator | Gather list of mgr modules ---------------------------------------------- 3.04s
2026-04-17 07:53:31.751855 | orchestrator | Get container info ------------------------------------------------------ 2.71s
2026-04-17 07:53:31.751864 | orchestrator | Get timestamp for report file ------------------------------------------- 2.71s
2026-04-17 07:53:31.751872 | orchestrator | Aggregate test results step one ----------------------------------------- 2.47s
2026-04-17 07:53:31.751881 | orchestrator | Write report file ------------------------------------------------------- 2.34s
2026-04-17 07:53:31.751889 | orchestrator | Flush handlers ---------------------------------------------------------- 1.95s
2026-04-17 07:53:31.751898 | orchestrator | Prepare test data for container existance test -------------------------- 1.81s
2026-04-17 07:53:31.751906 | orchestrator | Print report file information ------------------------------------------- 1.75s
2026-04-17 07:53:31.751915 | orchestrator | Flush handlers ---------------------------------------------------------- 1.62s
2026-04-17 07:53:31.751923 | orchestrator | Create report output directory ------------------------------------------ 1.59s
2026-04-17 07:53:31.751932 | orchestrator | Set validation result to passed if no test failed ----------------------- 1.54s
2026-04-17 07:53:31.751940 | orchestrator | Set validation result to failed if a test failed ------------------------ 1.51s
2026-04-17 07:53:31.751949 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 1.47s
2026-04-17 07:53:31.751958 | orchestrator | Set test result to passed if container is existing ---------------------- 1.46s
2026-04-17 07:53:31.751966 | orchestrator | Prepare test data ------------------------------------------------------- 1.35s
2026-04-17 07:53:31.751975 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 1.35s
2026-04-17 07:53:31.751983 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 1.34s
2026-04-17 07:53:31.751992 | orchestrator | Aggregate test results step one ----------------------------------------- 1.33s
2026-04-17 07:53:31.752000 | orchestrator | Aggregate test results step two ----------------------------------------- 1.31s
2026-04-17 07:53:31.752009 | orchestrator | Set test result to failed if container is missing ----------------------- 1.31s
2026-04-17 07:53:31.937900 | orchestrator | + osism validate ceph-osds
2026-04-17 07:53:53.782089 | orchestrator |
2026-04-17 07:53:53.782225 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-04-17 07:53:53.782243 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-17 07:53:53.782256 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-17 07:53:53.782279 | orchestrator |
2026-04-17 07:53:53.782290 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-17 07:53:53.782301 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-17 07:53:53.782312 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-17 07:53:53.782333 | orchestrator | Friday 17 April 2026 07:53:48 +0000 (0:00:01.402) 0:00:01.402 **********
2026-04-17 07:53:53.782345 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-17 07:53:53.782356 | orchestrator |
2026-04-17 07:53:53.782367 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-17 07:53:53.782378 | orchestrator | Friday 17 April 2026 07:53:49 +0000 (0:00:01.691) 0:00:03.094 **********
2026-04-17 07:53:53.782389 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-17 07:53:53.782400 | orchestrator |
2026-04-17 07:53:53.782411 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-17 07:53:53.782422 | orchestrator | Friday 17 April 2026 07:53:50 +0000 (0:00:00.357) 0:00:03.451 **********
2026-04-17 07:53:53.782432 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-17 07:53:53.782444 | orchestrator |
2026-04-17 07:53:53.782454 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-17 07:53:53.782465 | orchestrator | Friday 17 April 2026 07:53:51 +0000 (0:00:00.757) 0:00:04.209 **********
2026-04-17 07:53:53.782477 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:53:53.782489 | orchestrator |
2026-04-17 07:53:53.782500 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-17 07:53:53.782510 | orchestrator | Friday 17 April 2026 07:53:51 +0000 (0:00:00.130) 0:00:04.340 **********
2026-04-17 07:53:53.782522 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:53:53.782533 | orchestrator |
2026-04-17 07:53:53.782557 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-17 07:53:53.782571 | orchestrator | Friday 17 April 2026 07:53:51 +0000 (0:00:00.138) 0:00:04.479 **********
2026-04-17 07:53:53.782584 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:53:53.782597 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:53:53.782610 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:53:53.782650 | orchestrator |
2026-04-17 07:53:53.782664 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-17 07:53:53.782677 | orchestrator | Friday 17 April 2026 07:53:52 +0000 (0:00:00.810) 0:00:05.289 **********
2026-04-17 07:53:53.782690 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:53:53.782702 | orchestrator |
2026-04-17 07:53:53.782714 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-17 07:53:53.782727 | orchestrator | Friday 17 April 2026 07:53:52 +0000 (0:00:00.176) 0:00:05.466 **********
2026-04-17 07:53:53.782740 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:53:53.782752 | orchestrator | ok: [testbed-node-4]
2026-04-17 07:53:53.782765 | orchestrator | ok: [testbed-node-5]
2026-04-17 07:53:53.782777 | orchestrator |
2026-04-17 07:53:53.782789 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-04-17 07:53:53.782802 | orchestrator | Friday 17 April 2026 07:53:52 +0000 (0:00:00.328) 0:00:05.794 **********
2026-04-17 07:53:53.782821 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:53:53.782840 | orchestrator |
2026-04-17 07:53:53.782865 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-17 07:53:53.782892 | orchestrator | Friday 17 April 2026 07:53:53 +0000 (0:00:00.359) 0:00:06.153 **********
2026-04-17 07:53:53.782923 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:53:53.782942 | orchestrator | ok: [testbed-node-4]
2026-04-17 07:53:53.782961 | orchestrator | ok: [testbed-node-5]
2026-04-17 07:53:53.782977 | orchestrator |
2026-04-17 07:53:53.782995 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-04-17 07:53:53.783014 | orchestrator | Friday 17 April 2026 07:53:53 +0000 (0:00:00.309) 0:00:06.463 **********
2026-04-17 07:53:53.783034 | orchestrator | skipping: [testbed-node-3] => (item={'id': '04d3cf43c0b242081ba64bde7ace53a9aefcd7441ab38d5e24ec3cdc1422894f', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})
2026-04-17 07:53:53.783058 | orchestrator | skipping: [testbed-node-3] => (item={'id': '56c852ddeff0aa73050eafd77db4e56ee298d5bbb5ce703253d0c7ff220a9d31', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-17 07:53:53.783078 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eea13a8fa52e0beae1501b6e56eda51244231ef2d4e0e78ddf9f807fed217ffe', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 8 minutes'})
2026-04-17 07:53:53.783120 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a856449f63f112c416b7ffb32ad60ad59a463f86a0c9406d6c3ca4a5734551e2', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 23 minutes'})
2026-04-17 07:53:53.783134 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aa647286829f4afbd6565e69a354fcb4daaef578ee4acda574a80bf21698053a', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})
2026-04-17 07:53:53.783145 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4da7c2fbc46f8056b458b70a6d20088029d776f4c7fe7b878a585eb4ded29dd8', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})
2026-04-17 07:53:53.783157 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8fd8b9f141e05e0358947df50ac293ca741c1adb81829bfcae6ff4a14ee33b78', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})
2026-04-17 07:53:53.783168 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd462464d5700b8d3a295a6b98537ef66fd6ed7c472e1bdbbac0b0f3deb9fa785', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-17 07:53:53.783191 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9cbdf32a2765754d1192d954b798c2b8995496f68236bb49342d84228eba8bb8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-17 07:53:53.783209 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7208b08ca40e6e30b306f7451e54f3e891af3e7a11fd1f71ded767d230a368b7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-17 07:53:53.783221 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4aadc4b052481287a27d006ab650e70cb6588192844b9089e9dc0196d9bd3d81', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-17 07:53:53.783234 | orchestrator | ok: [testbed-node-3] => (item={'id': '60b761304747fb1365c425c98f42cfb6f31fa882632c2a6a848e88a23769833b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.783254 | orchestrator | ok: [testbed-node-3] => (item={'id': 'a53b48e3a12f5043dd4a454d9b3cedc118c97fa47be06f773b84f899b04a9138', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.783266 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0d5ec407ff892d80283e8759751b143d884c6b82ddbeea9889cd912be4beb26b', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.783277 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9415e91548518824253e31e56ec6bfa9c7447ddd9cfa40d5b3b6b56dcd14aa4c', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 2 hours (healthy)'})
2026-04-17 07:53:53.783288 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f9fcbd9f38af60d678c682c948444b123f8f639156d8f37e008bf96c9cfde9d5', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 2 hours (healthy)'})
2026-04-17 07:53:53.783299 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8acd4c53a69e85170a74950e5b7ca77c6da31324c8c8c4b70f03e2a2468c9caa', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.783311 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7d4fb5fca13d448a5737e525a7e29312f400930967558814e8b1a4e05f0bbd5f', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.783329 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6bbc6af39fa64b7e899e51aba29102942478581cc6a18fcebbd933f4c5cae830', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.950823 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eeecae9a591c76793490b9e9c19ba94529a06279c6b203dc925e0ccc19e6db83', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})
2026-04-17 07:53:53.950895 | orchestrator | skipping: [testbed-node-4] => (item={'id': '90665483ebe885c3c863b7daa8f23bec1a3dcd62382e9f525e7e18b60f7360af', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-17 07:53:53.950903 | orchestrator | skipping: [testbed-node-4] => (item={'id': '82a32945816f2d71e364b16d536ec02d28bdfd3a22d924e2c23deaefd22f41e1', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 8 minutes'})
2026-04-17 07:53:53.950908 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3e533e1ddae631a8294871a1ae8a841f4f9faf5857d9549c3d3907f4ef440502', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 23 minutes'})
2026-04-17 07:53:53.950914 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f684dfb61733621f36e32c7080badfadad38180cb6a4f9854720e46afc095736', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})
2026-04-17 07:53:53.950931 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5859abdec2dc6adecb82434e168398c46d59e5607db7962b8ddc49287e817faf', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})
2026-04-17 07:53:53.950952 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b150f215a3d013d2ea384fc20dd41290b14cdc6f0092c1dd6a7d83927452fd43', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})
2026-04-17 07:53:53.950958 | orchestrator | skipping: [testbed-node-4] => (item={'id': '55f467e4e56f60c03a7921ae1b961852e357ef4e926816786b9326809fb5de45', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-17 07:53:53.950965 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e9191d45f8674c9e52c74bc81b01607d3aeef2d2784d2ed7d46b99076b5a1033', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-04-17 07:53:53.950971 | orchestrator | skipping: [testbed-node-4] => (item={'id': '789223389d1dea3cc3e8745cd4430a008d08bd9578f7dedc097fc5f178d3cb32', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-17 07:53:53.950976 | orchestrator | skipping: [testbed-node-4] => (item={'id': '06b84c005755a161512bf9dba2f80fe956990f3af9e381b1c88a168b8eea44e0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.950982 | orchestrator | ok: [testbed-node-4] => (item={'id': '4440e707ac187bcdf9351dd26e3ed9f4c185bd0f0f358a8dd75cd9de1302e3d2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.950988 | orchestrator | ok: [testbed-node-4] => (item={'id': 'f9f6a0fb2bad68ea4f0565f9d162e842e2fc9e79400b8aac04afcf024a8e318a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.950993 | orchestrator | skipping: [testbed-node-4] => (item={'id': '68a2ac380d909974b6bd85b627529899da0761b181790b98dd486bf95a05cde0', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.951009 | orchestrator | skipping: [testbed-node-4] => (item={'id': '31061fb26b4a498f37bce647d6798798783a79888e87177d3d2495609dd057a0', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 2 hours (healthy)'})
2026-04-17 07:53:53.951015 | orchestrator | skipping: [testbed-node-4] => (item={'id': '17e0bd3cc8ab681f7f458199356b6e3e75b7e31d7b44999f804fdbcf4376c195', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 2 hours (healthy)'})
2026-04-17 07:53:53.951021 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8f722f2258cc54daa4ab43333f50ef3b55dfc09ba100f7c0b0d59c1773f3c382', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.951026 | orchestrator | skipping: [testbed-node-4] => (item={'id': '39bafeda64cd95a0c4490d72856308cf5c4bff83614958811902b9edba7d1346', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.951031 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f48d6b3b1339b1871b91051a2351548c95037020dd36768535a27457dbca2686', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:53:53.951037 | orchestrator | skipping: [testbed-node-5] => (item={'id': '38bbf0ab3aed3625daa4ba2143d83174a77a269d543cf71e7b3aeee53a1d93f9', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})
2026-04-17 07:53:53.951047 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f7ea19de477a53d73deed936b8477f857e95748bad89b68606ab001fa27cb409', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-17 07:53:53.951053 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8f2f88310e2eca4e3dbd39784cf92f022cdd6d082d9cf4c180846ac3f74c0afa', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 8 minutes'})
2026-04-17 07:53:53.951058 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6e4a011000e25a26a9b4536b2f8d0ff3c4c39aaa979283d53dce2de6ca49b9d3', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 23 minutes'})
2026-04-17 07:53:53.951063 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fc76d03421703268228bb59b055d44552e5a174507596eb05d5f720908057419', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})
2026-04-17 07:53:53.951073 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4c8d14032212e584e8d6285f6aca741aad05c44a8be1d2041fe564631cfc0e1c', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})
2026-04-17 07:53:53.951079 | orchestrator | skipping: [testbed-node-5] => (item={'id': '87d79cc5add09480302be6a47b445923c960ae8735f2ba2d2ef48685fcf036a4', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})
2026-04-17 07:53:53.951084 | orchestrator | skipping: [testbed-node-5] => (item={'id': '75521316ac4be6e587b675020bb7ecbe0d8e401ca8d272b28ef57b7edfd5482e', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-17 07:53:53.951089 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd3251b54ac0a146a3f08e46a61742a47f275df338ef6225c55798cfd385ce72a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-04-17 07:53:53.951098 | orchestrator | skipping: [testbed-node-5] => (item={'id': '251870a3122a5e9dcd05b56963e8baa547f66e34349b278fddcf2872c93b1d03', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-17 07:54:02.690909 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7388e4869ee1eae7625c10ab9144b6ceb193257d8ce631b1d4d5ed229033b8bd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-04-17 07:54:02.691051 | orchestrator | ok: [testbed-node-5] => (item={'id': '5d93b39b661ee610276b1ed62e9c6c9bba608b846b86a461d66e62b316574f7e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:54:02.691082 | orchestrator | ok: [testbed-node-5] => (item={'id': '401cf7253b3bb9fb655eb18a3c40ce71c2fca5637ae173e94de13252226fb38b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:54:02.691103 | orchestrator | skipping: [testbed-node-5] => (item={'id': '648e5bbc26a7f6c7c1a28cb7c7f94e33f83db76c69041480e433d9ef82381cd8', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:54:02.691154 | orchestrator | skipping: [testbed-node-5] => (item={'id': '11c8d38553afb6fd7b895b47377f56c45e1e7a5ed2aa98dc6e2f036215b8f7cc', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 2 hours (healthy)'})
2026-04-17 07:54:02.691196 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4da1df3022839057b75fb5c23a4c75ed7982f755b059079ddd87e79e497de63e', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 2 hours (healthy)'})
2026-04-17 07:54:02.691209 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a1ea98dbf89d8be0b3a5a29a2d936368e3f92eafd345e61eb2ca2f17a63336ed', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:54:02.691222 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cb0cedfd20d76d5d2f1a75c80df30926a8822ed813e4caf15f679b12b7b0d88b', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:54:02.691233 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3f1a9db202b2351aec244d6aaf1aed18fc120fb24451a6c82494b436b3aed690', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-17 07:54:02.691245 | orchestrator |
2026-04-17 07:54:02.691259 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-04-17 07:54:02.691271 | orchestrator | Friday 17 April 2026 07:53:54 +0000 (0:00:00.743) 0:00:07.207 **********
2026-04-17 07:54:02.691312 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:54:02.691325 | orchestrator | ok: [testbed-node-4]
2026-04-17 07:54:02.691336 | orchestrator | ok: [testbed-node-5]
2026-04-17 07:54:02.691346 | orchestrator |
2026-04-17 07:54:02.691358 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-04-17 07:54:02.691369 | orchestrator | Friday 17 April 2026 07:53:54 +0000 (0:00:00.313) 0:00:07.520 **********
2026-04-17 07:54:02.691380 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:54:02.691392 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:54:02.691402 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:54:02.691413 | orchestrator |
2026-04-17 07:54:02.691424 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-04-17 07:54:02.691436 | orchestrator | Friday 17 April 2026 07:53:54 +0000 (0:00:00.313) 0:00:07.834 **********
2026-04-17 07:54:02.691450 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:54:02.691462 | orchestrator | ok: [testbed-node-4]
2026-04-17 07:54:02.691475 | orchestrator | ok: [testbed-node-5]
2026-04-17 07:54:02.691487 | orchestrator |
2026-04-17 07:54:02.691500 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-17 07:54:02.691513 | orchestrator | Friday 17 April 2026 07:53:55 +0000 (0:00:00.350) 0:00:08.184 **********
2026-04-17 07:54:02.691525 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:54:02.691537 | orchestrator | ok: [testbed-node-4]
2026-04-17 07:54:02.691550 | orchestrator | ok: [testbed-node-5]
2026-04-17 07:54:02.691562 | orchestrator |
2026-04-17 07:54:02.691575 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-04-17 07:54:02.691594 | orchestrator | Friday 17 April 2026 07:53:55 +0000 (0:00:00.556) 0:00:08.741 **********
2026-04-17 07:54:02.691614 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-04-17 07:54:02.691672 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-04-17 07:54:02.691691 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:54:02.691722 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-04-17 07:54:02.691775 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-04-17 07:54:02.691796 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:54:02.691812 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-04-17 07:54:02.691827 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-04-17 07:54:02.691842 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:54:02.691857 | orchestrator |
2026-04-17 07:54:02.691872 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-04-17 07:54:02.691888 | orchestrator | Friday 17 April 2026 07:53:55 +0000 (0:00:00.332) 0:00:09.073 **********
2026-04-17 07:54:02.691904 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:54:02.691921 | orchestrator | ok: [testbed-node-4]
2026-04-17 07:54:02.691939 | orchestrator | ok: [testbed-node-5]
2026-04-17 07:54:02.691958 | orchestrator |
2026-04-17 07:54:02.691977 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-17 07:54:02.691995 | orchestrator | Friday 17 April 2026 07:53:56 +0000 (0:00:00.322) 0:00:09.396 **********
2026-04-17 07:54:02.692015 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:54:02.692035 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:54:02.692054 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:54:02.692073 | orchestrator |
2026-04-17 07:54:02.692084 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-17 07:54:02.692095 | orchestrator | Friday 17 April 2026 07:53:56 +0000 (0:00:00.522) 0:00:09.919 **********
2026-04-17 07:54:02.692109 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:54:02.692128 | orchestrator | skipping: [testbed-node-4]
2026-04-17 07:54:02.692146 | orchestrator | skipping: [testbed-node-5]
2026-04-17 07:54:02.692164 | orchestrator |
2026-04-17 07:54:02.692182 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-04-17 07:54:02.692198 | orchestrator | Friday 17 April 2026 07:53:57 +0000 (0:00:00.338) 0:00:10.258 **********
2026-04-17 07:54:02.692215 | orchestrator | ok: [testbed-node-3]
2026-04-17 07:54:02.692233 | orchestrator | ok: [testbed-node-4]
2026-04-17 07:54:02.692251 | orchestrator | ok: [testbed-node-5]
2026-04-17 07:54:02.692270 | orchestrator |
2026-04-17 07:54:02.692288 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-17 07:54:02.692318 | orchestrator | Friday 17 April 2026 07:53:57 +0000 (0:00:00.359) 0:00:10.617 **********
2026-04-17 07:54:02.692339 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:54:02.692357 | orchestrator |
2026-04-17 07:54:02.692375 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-17 07:54:02.692394 | orchestrator | Friday 17 April 2026 07:53:57 +0000 (0:00:00.272) 0:00:10.890 **********
2026-04-17 07:54:02.692413 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:54:02.692433 | orchestrator |
2026-04-17 07:54:02.692451 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-17 07:54:02.692468 | orchestrator | Friday 17 April 2026 07:53:58 +0000 (0:00:00.294) 0:00:11.185 **********
2026-04-17 07:54:02.692479 | orchestrator | skipping: [testbed-node-3]
2026-04-17 07:54:02.692490 | orchestrator |
2026-04-17 07:54:02.692501 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:54:02.692519 | orchestrator | Friday 17 April 2026 07:53:58 +0000 (0:00:00.070) 0:00:11.459 **********
2026-04-17 07:54:02.692536 | orchestrator |
2026-04-17 07:54:02.692556 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:54:02.692575 | orchestrator | Friday 17 April 2026 07:53:58 +0000 (0:00:00.070) 0:00:11.530 **********
2026-04-17 07:54:02.692593 | orchestrator |
2026-04-17 07:54:02.692612 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-17 07:54:02.692676 | orchestrator | Friday 17 April 2026 07:53:58 +0000 (0:00:00.262) 0:00:11.793 **********
2026-04-17 07:54:02.692711 | orchestrator |
2026-04-17 07:54:02.692731 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17
07:54:02.692749 | orchestrator | Friday 17 April 2026 07:53:58 +0000 (0:00:00.075) 0:00:11.868 ********** 2026-04-17 07:54:02.692768 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:54:02.692787 | orchestrator | 2026-04-17 07:54:02.692806 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-17 07:54:02.692825 | orchestrator | Friday 17 April 2026 07:53:59 +0000 (0:00:00.287) 0:00:12.156 ********** 2026-04-17 07:54:02.692844 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:54:02.692865 | orchestrator | 2026-04-17 07:54:02.692883 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 07:54:02.692901 | orchestrator | Friday 17 April 2026 07:53:59 +0000 (0:00:00.271) 0:00:12.427 ********** 2026-04-17 07:54:02.692920 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:02.692938 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:54:02.692955 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:54:02.692973 | orchestrator | 2026-04-17 07:54:02.692991 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-17 07:54:02.693007 | orchestrator | Friday 17 April 2026 07:53:59 +0000 (0:00:00.324) 0:00:12.752 ********** 2026-04-17 07:54:02.693023 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:02.693040 | orchestrator | 2026-04-17 07:54:02.693057 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-17 07:54:02.693074 | orchestrator | Friday 17 April 2026 07:53:59 +0000 (0:00:00.255) 0:00:13.007 ********** 2026-04-17 07:54:02.693092 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 07:54:02.693110 | orchestrator | 2026-04-17 07:54:02.693127 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-17 07:54:02.693143 | orchestrator | Friday 17 April 2026 07:54:02 
+0000 (0:00:02.327) 0:00:15.335 ********** 2026-04-17 07:54:02.693160 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:02.693178 | orchestrator | 2026-04-17 07:54:02.693195 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-17 07:54:02.693212 | orchestrator | Friday 17 April 2026 07:54:02 +0000 (0:00:00.146) 0:00:15.481 ********** 2026-04-17 07:54:02.693230 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:02.693249 | orchestrator | 2026-04-17 07:54:02.693268 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-17 07:54:02.693309 | orchestrator | Friday 17 April 2026 07:54:02 +0000 (0:00:00.324) 0:00:15.805 ********** 2026-04-17 07:54:17.306907 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:54:17.307011 | orchestrator | 2026-04-17 07:54:17.307027 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-17 07:54:17.307039 | orchestrator | Friday 17 April 2026 07:54:03 +0000 (0:00:00.330) 0:00:16.136 ********** 2026-04-17 07:54:17.307048 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:17.307059 | orchestrator | 2026-04-17 07:54:17.307069 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 07:54:17.307078 | orchestrator | Friday 17 April 2026 07:54:03 +0000 (0:00:00.148) 0:00:16.284 ********** 2026-04-17 07:54:17.307088 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:17.307098 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:54:17.307107 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:54:17.307116 | orchestrator | 2026-04-17 07:54:17.307126 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-17 07:54:17.307135 | orchestrator | Friday 17 April 2026 07:54:03 +0000 (0:00:00.330) 0:00:16.615 ********** 2026-04-17 07:54:17.307145 | orchestrator | changed: 
[testbed-node-3] 2026-04-17 07:54:17.307154 | orchestrator | changed: [testbed-node-4] 2026-04-17 07:54:17.307164 | orchestrator | changed: [testbed-node-5] 2026-04-17 07:54:17.307173 | orchestrator | 2026-04-17 07:54:17.307183 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-17 07:54:17.307192 | orchestrator | Friday 17 April 2026 07:54:06 +0000 (0:00:02.647) 0:00:19.262 ********** 2026-04-17 07:54:17.307225 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:17.307244 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:54:17.307260 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:54:17.307276 | orchestrator | 2026-04-17 07:54:17.307292 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-17 07:54:17.307308 | orchestrator | Friday 17 April 2026 07:54:06 +0000 (0:00:00.524) 0:00:19.786 ********** 2026-04-17 07:54:17.307323 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:17.307339 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:54:17.307356 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:54:17.307369 | orchestrator | 2026-04-17 07:54:17.307383 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-17 07:54:17.307397 | orchestrator | Friday 17 April 2026 07:54:07 +0000 (0:00:00.513) 0:00:20.300 ********** 2026-04-17 07:54:17.307429 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:54:17.307448 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:54:17.307464 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:54:17.307481 | orchestrator | 2026-04-17 07:54:17.307497 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-17 07:54:17.307514 | orchestrator | Friday 17 April 2026 07:54:07 +0000 (0:00:00.306) 0:00:20.607 ********** 2026-04-17 07:54:17.307532 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:17.307549 | 
orchestrator | ok: [testbed-node-4] 2026-04-17 07:54:17.307562 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:54:17.307573 | orchestrator | 2026-04-17 07:54:17.307585 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-17 07:54:17.307597 | orchestrator | Friday 17 April 2026 07:54:07 +0000 (0:00:00.335) 0:00:20.942 ********** 2026-04-17 07:54:17.307607 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:54:17.307618 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:54:17.307660 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:54:17.307672 | orchestrator | 2026-04-17 07:54:17.307683 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-17 07:54:17.307694 | orchestrator | Friday 17 April 2026 07:54:08 +0000 (0:00:00.540) 0:00:21.483 ********** 2026-04-17 07:54:17.307704 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:54:17.307716 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:54:17.307727 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:54:17.307738 | orchestrator | 2026-04-17 07:54:17.307749 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 07:54:17.307760 | orchestrator | Friday 17 April 2026 07:54:08 +0000 (0:00:00.312) 0:00:21.796 ********** 2026-04-17 07:54:17.307771 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:17.307782 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:54:17.307793 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:54:17.307804 | orchestrator | 2026-04-17 07:54:17.307815 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-17 07:54:17.307825 | orchestrator | Friday 17 April 2026 07:54:09 +0000 (0:00:00.488) 0:00:22.284 ********** 2026-04-17 07:54:17.307834 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:17.307844 | orchestrator | ok: [testbed-node-4] 
2026-04-17 07:54:17.307853 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:54:17.307863 | orchestrator | 2026-04-17 07:54:17.307873 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-17 07:54:17.307882 | orchestrator | Friday 17 April 2026 07:54:09 +0000 (0:00:00.500) 0:00:22.785 ********** 2026-04-17 07:54:17.307892 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:17.307901 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:54:17.307911 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:54:17.307920 | orchestrator | 2026-04-17 07:54:17.307930 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-17 07:54:17.307940 | orchestrator | Friday 17 April 2026 07:54:10 +0000 (0:00:00.561) 0:00:23.346 ********** 2026-04-17 07:54:17.307949 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:54:17.307959 | orchestrator | skipping: [testbed-node-4] 2026-04-17 07:54:17.307978 | orchestrator | skipping: [testbed-node-5] 2026-04-17 07:54:17.307987 | orchestrator | 2026-04-17 07:54:17.307997 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-17 07:54:17.308007 | orchestrator | Friday 17 April 2026 07:54:10 +0000 (0:00:00.317) 0:00:23.664 ********** 2026-04-17 07:54:17.308016 | orchestrator | ok: [testbed-node-3] 2026-04-17 07:54:17.308026 | orchestrator | ok: [testbed-node-4] 2026-04-17 07:54:17.308035 | orchestrator | ok: [testbed-node-5] 2026-04-17 07:54:17.308045 | orchestrator | 2026-04-17 07:54:17.308054 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-17 07:54:17.308064 | orchestrator | Friday 17 April 2026 07:54:10 +0000 (0:00:00.368) 0:00:24.033 ********** 2026-04-17 07:54:17.308074 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 07:54:17.308083 | orchestrator | 2026-04-17 07:54:17.308093 | orchestrator | 
TASK [Set validation result to failed if a test failed] ************************ 2026-04-17 07:54:17.308103 | orchestrator | Friday 17 April 2026 07:54:11 +0000 (0:00:00.279) 0:00:24.313 ********** 2026-04-17 07:54:17.308130 | orchestrator | skipping: [testbed-node-3] 2026-04-17 07:54:17.308140 | orchestrator | 2026-04-17 07:54:17.308150 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 07:54:17.308159 | orchestrator | Friday 17 April 2026 07:54:11 +0000 (0:00:00.500) 0:00:24.814 ********** 2026-04-17 07:54:17.308169 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 07:54:17.308179 | orchestrator | 2026-04-17 07:54:17.308188 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 07:54:17.308198 | orchestrator | Friday 17 April 2026 07:54:13 +0000 (0:00:02.152) 0:00:26.966 ********** 2026-04-17 07:54:17.308208 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 07:54:17.308217 | orchestrator | 2026-04-17 07:54:17.308227 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-17 07:54:17.308236 | orchestrator | Friday 17 April 2026 07:54:14 +0000 (0:00:00.345) 0:00:27.312 ********** 2026-04-17 07:54:17.308246 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 07:54:17.308256 | orchestrator | 2026-04-17 07:54:17.308265 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 07:54:17.308275 | orchestrator | Friday 17 April 2026 07:54:14 +0000 (0:00:00.302) 0:00:27.615 ********** 2026-04-17 07:54:17.308284 | orchestrator | 2026-04-17 07:54:17.308294 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 07:54:17.308304 | orchestrator | Friday 17 April 2026 07:54:14 +0000 (0:00:00.088) 0:00:27.703 ********** 
2026-04-17 07:54:17.308313 | orchestrator | 2026-04-17 07:54:17.308323 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 07:54:17.308333 | orchestrator | Friday 17 April 2026 07:54:14 +0000 (0:00:00.081) 0:00:27.784 ********** 2026-04-17 07:54:17.308342 | orchestrator | 2026-04-17 07:54:17.308352 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-17 07:54:17.308361 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-17 07:54:17.308372 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-17 07:54:17.308397 | orchestrator | Friday 17 April 2026 07:54:14 +0000 (0:00:00.077) 0:00:27.862 ********** 2026-04-17 07:54:17.308407 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 07:54:17.308416 | orchestrator | 2026-04-17 07:54:17.308426 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 07:54:17.308435 | orchestrator | Friday 17 April 2026 07:54:16 +0000 (0:00:01.333) 0:00:29.195 ********** 2026-04-17 07:54:17.308445 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-17 07:54:17.308455 | orchestrator |  "msg": [ 2026-04-17 07:54:17.308465 | orchestrator |  "Validator run completed.", 2026-04-17 07:54:17.308480 | orchestrator |  "You can find the report file here:", 2026-04-17 07:54:17.308490 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-17T07:53:49+00:00-report.json", 2026-04-17 07:54:17.308500 | orchestrator |  "on the following host:", 2026-04-17 07:54:17.308510 | orchestrator |  "testbed-manager" 2026-04-17 07:54:17.308520 | orchestrator |  ] 2026-04-17 07:54:17.308530 | orchestrator | } 2026-04-17 07:54:17.308540 | orchestrator | 2026-04-17 07:54:17.308549 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-17 07:54:17.308560 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 07:54:17.308571 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-17 07:54:17.308580 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-17 07:54:17.308590 | orchestrator | 2026-04-17 07:54:17.308600 | orchestrator | 2026-04-17 07:54:17.308609 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 07:54:17.308619 | orchestrator | Friday 17 April 2026 07:54:17 +0000 (0:00:01.209) 0:00:30.404 ********** 2026-04-17 07:54:17.308654 | orchestrator | =============================================================================== 2026-04-17 07:54:17.308665 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.65s 2026-04-17 07:54:17.308674 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.33s 2026-04-17 07:54:17.308684 | orchestrator | Aggregate test results step one ----------------------------------------- 2.15s 2026-04-17 07:54:17.308693 | orchestrator | Get timestamp for report file ------------------------------------------- 1.69s 2026-04-17 07:54:17.308703 | orchestrator | Write report file ------------------------------------------------------- 1.33s 2026-04-17 07:54:17.308712 | orchestrator | Print report file information ------------------------------------------- 1.21s 2026-04-17 07:54:17.308722 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.81s 2026-04-17 07:54:17.308731 | orchestrator | Create report output directory ------------------------------------------ 0.76s 2026-04-17 07:54:17.308741 | orchestrator | Get list of ceph-osd containers on host 
--------------------------------- 0.74s 2026-04-17 07:54:17.308750 | orchestrator | Calculate sub test expression results ----------------------------------- 0.56s 2026-04-17 07:54:17.308760 | orchestrator | Prepare test data ------------------------------------------------------- 0.56s 2026-04-17 07:54:17.308769 | orchestrator | Fail if count of unencrypted OSDs does not match ------------------------ 0.54s 2026-04-17 07:54:17.308785 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.52s 2026-04-17 07:54:17.726987 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.52s 2026-04-17 07:54:17.727063 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s 2026-04-17 07:54:17.727070 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.50s 2026-04-17 07:54:17.727075 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.50s 2026-04-17 07:54:17.727080 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2026-04-17 07:54:17.727085 | orchestrator | Flush handlers ---------------------------------------------------------- 0.41s 2026-04-17 07:54:17.727089 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.37s 2026-04-17 07:54:17.939743 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-17 07:54:17.948057 | orchestrator | + set -e 2026-04-17 07:54:17.948128 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 07:54:17.948150 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 07:54:17.948171 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 07:54:17.948191 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 07:54:17.948350 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 07:54:17.948367 | orchestrator | ++ export CONFIGURATION_VERSION=main 
2026-04-17 07:54:17.948379 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 07:54:17.948390 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-17 07:54:17.948401 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-17 07:54:17.948412 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 07:54:17.948423 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 07:54:17.948433 | orchestrator | ++ export ARA=false 2026-04-17 07:54:17.948444 | orchestrator | ++ ARA=false 2026-04-17 07:54:17.948455 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 07:54:17.948466 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 07:54:17.948476 | orchestrator | ++ export TEMPEST=false 2026-04-17 07:54:17.948487 | orchestrator | ++ TEMPEST=false 2026-04-17 07:54:17.948498 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 07:54:17.948508 | orchestrator | ++ IS_ZUUL=true 2026-04-17 07:54:17.948519 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 07:54:17.948530 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.96 2026-04-17 07:54:17.948541 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 07:54:17.948552 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 07:54:17.948562 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 07:54:17.948573 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 07:54:17.948584 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 07:54:17.948595 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 07:54:17.948605 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 07:54:17.948617 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 07:54:17.948661 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-17 07:54:17.948673 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-17 07:54:17.948684 | orchestrator | + source /etc/os-release 2026-04-17 07:54:17.948695 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-17 07:54:17.948706 | orchestrator | ++ NAME=Ubuntu 2026-04-17 
07:54:17.948716 | orchestrator | ++ VERSION_ID=24.04 2026-04-17 07:54:17.948727 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-17 07:54:17.948738 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-17 07:54:17.948748 | orchestrator | ++ ID=ubuntu 2026-04-17 07:54:17.948759 | orchestrator | ++ ID_LIKE=debian 2026-04-17 07:54:17.948770 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-17 07:54:17.948781 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-17 07:54:17.948791 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-17 07:54:17.948814 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-17 07:54:17.948826 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-17 07:54:17.948837 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-17 07:54:17.948847 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-17 07:54:17.948859 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-17 07:54:17.948872 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-17 07:54:17.974997 | orchestrator | 2026-04-17 07:54:17.975060 | orchestrator | # Status of Elasticsearch 2026-04-17 07:54:17.975069 | orchestrator | 2026-04-17 07:54:17.975077 | orchestrator | + pushd /opt/configuration/contrib 2026-04-17 07:54:17.975085 | orchestrator | + echo 2026-04-17 07:54:17.975092 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-17 07:54:17.975099 | orchestrator | + echo 2026-04-17 07:54:17.975106 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-17 07:54:18.159022 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-17 07:54:18.159117 | orchestrator | 2026-04-17 07:54:18.159132 | orchestrator | # Status of MariaDB 2026-04-17 07:54:18.159144 | orchestrator | 2026-04-17 07:54:18.159156 | orchestrator | + echo 2026-04-17 07:54:18.159168 | orchestrator | + echo '# Status of MariaDB' 2026-04-17 07:54:18.159179 | orchestrator | + echo 2026-04-17 07:54:18.159405 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-17 07:54:18.217688 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-17 07:54:18.217778 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-17 07:54:18.217792 | orchestrator | + MARIADB_USER=root_shard_0 2026-04-17 07:54:18.217805 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-04-17 07:54:18.294959 | orchestrator | Reading package lists... 2026-04-17 07:54:18.639119 | orchestrator | Building dependency tree... 2026-04-17 07:54:18.641414 | orchestrator | Reading state information... 2026-04-17 07:54:19.057902 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-04-17 07:54:19.058002 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 
2026-04-17 07:54:19.730606 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-04-17 07:54:19.731408 | orchestrator | 2026-04-17 07:54:19.731462 | orchestrator | # Status of Prometheus 2026-04-17 07:54:19.731477 | orchestrator | 2026-04-17 07:54:19.731490 | orchestrator | + echo 2026-04-17 07:54:19.731504 | orchestrator | + echo '# Status of Prometheus' 2026-04-17 07:54:19.731515 | orchestrator | + echo 2026-04-17 07:54:19.731531 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-17 07:54:19.804091 | orchestrator | Unauthorized 2026-04-17 07:54:19.810504 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-17 07:54:19.868091 | orchestrator | Unauthorized 2026-04-17 07:54:19.871586 | orchestrator | 2026-04-17 07:54:19.871669 | orchestrator | # Status of RabbitMQ 2026-04-17 07:54:19.871681 | orchestrator | 2026-04-17 07:54:19.871690 | orchestrator | + echo 2026-04-17 07:54:19.871699 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-17 07:54:19.871708 | orchestrator | + echo 2026-04-17 07:54:19.873290 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-17 07:54:19.927261 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-17 07:54:19.927329 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-17 07:54:19.927338 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-04-17 07:54:20.486912 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-04-17 07:54:20.497351 | orchestrator | 2026-04-17 07:54:20.497422 | orchestrator | # Status of Redis 2026-04-17 07:54:20.497429 | orchestrator | 2026-04-17 07:54:20.497434 | orchestrator | + echo 2026-04-17 07:54:20.497440 | orchestrator | + echo '# Status of Redis' 2026-04-17 07:54:20.497445 | orchestrator | + echo 2026-04-17 07:54:20.497452 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A 
-E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-17 07:54:20.507584 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001955s;;;0.000000;10.000000 2026-04-17 07:54:20.508381 | orchestrator | 2026-04-17 07:54:20.508466 | orchestrator | # Create backup of MariaDB database 2026-04-17 07:54:20.508482 | orchestrator | 2026-04-17 07:54:20.508495 | orchestrator | + popd 2026-04-17 07:54:20.508506 | orchestrator | + echo 2026-04-17 07:54:20.508517 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-17 07:54:20.508528 | orchestrator | + echo 2026-04-17 07:54:20.508539 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-17 07:54:21.794299 | orchestrator | 2026-04-17 07:54:21 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-17 07:54:21.860303 | orchestrator | 2026-04-17 07:54:21 | INFO  | Task 0ccf58ca-48c2-400e-a977-066c028f54e3 (mariadb_backup) was prepared for execution. 2026-04-17 07:54:21.860387 | orchestrator | 2026-04-17 07:54:21 | INFO  | It takes a moment until task 0ccf58ca-48c2-400e-a977-066c028f54e3 (mariadb_backup) has been started and output is visible here. 
2026-04-17 07:55:18.236797 | orchestrator | 2026-04-17 07:55:18.236941 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 07:55:18.236966 | orchestrator | 2026-04-17 07:55:18.236986 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 07:55:18.237006 | orchestrator | Friday 17 April 2026 07:54:27 +0000 (0:00:02.388) 0:00:02.388 ********** 2026-04-17 07:55:18.237025 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:55:18.237045 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:55:18.237064 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:55:18.237082 | orchestrator | 2026-04-17 07:55:18.237100 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 07:55:18.237117 | orchestrator | Friday 17 April 2026 07:54:29 +0000 (0:00:02.285) 0:00:04.674 ********** 2026-04-17 07:55:18.237135 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-17 07:55:18.237155 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-17 07:55:18.237206 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-17 07:55:18.237224 | orchestrator | 2026-04-17 07:55:18.237242 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-17 07:55:18.237260 | orchestrator | 2026-04-17 07:55:18.237299 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-17 07:55:18.237320 | orchestrator | Friday 17 April 2026 07:54:32 +0000 (0:00:02.521) 0:00:07.196 ********** 2026-04-17 07:55:18.237339 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 07:55:18.237357 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-17 07:55:18.237375 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-17 07:55:18.237394 | orchestrator | 
2026-04-17 07:55:18.237413 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 07:55:18.237433 | orchestrator | Friday 17 April 2026 07:54:33 +0000 (0:00:01.463) 0:00:08.659 ********** 2026-04-17 07:55:18.237453 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 07:55:18.237473 | orchestrator | 2026-04-17 07:55:18.237490 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-17 07:55:18.237509 | orchestrator | Friday 17 April 2026 07:54:36 +0000 (0:00:03.002) 0:00:11.662 ********** 2026-04-17 07:55:18.237527 | orchestrator | ok: [testbed-node-1] 2026-04-17 07:55:18.237548 | orchestrator | ok: [testbed-node-0] 2026-04-17 07:55:18.237570 | orchestrator | ok: [testbed-node-2] 2026-04-17 07:55:18.237592 | orchestrator | 2026-04-17 07:55:18.237610 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-17 07:55:18.237659 | orchestrator | Friday 17 April 2026 07:54:41 +0000 (0:00:04.793) 0:00:16.456 ********** 2026-04-17 07:55:18.237679 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:55:18.237698 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:55:18.237716 | orchestrator | changed: [testbed-node-0] 2026-04-17 07:55:18.237735 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-17 07:55:18.237753 | orchestrator | 2026-04-17 07:55:18.237771 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-17 07:55:18.237789 | orchestrator | skipping: no hosts matched 2026-04-17 07:55:18.237807 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-17 07:55:18.237825 | orchestrator | 2026-04-17 07:55:18.237844 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-04-17 07:55:18.237861 | orchestrator | skipping: no hosts matched 2026-04-17 07:55:18.237879 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-17 07:55:18.237896 | orchestrator | mariadb_bootstrap_restart 2026-04-17 07:55:18.237914 | orchestrator | 2026-04-17 07:55:18.237931 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-17 07:55:18.237950 | orchestrator | skipping: no hosts matched 2026-04-17 07:55:18.237970 | orchestrator | 2026-04-17 07:55:18.237988 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-17 07:55:18.238008 | orchestrator | 2026-04-17 07:55:18.238104 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-17 07:55:18.238124 | orchestrator | Friday 17 April 2026 07:55:14 +0000 (0:00:32.964) 0:00:49.420 ********** 2026-04-17 07:55:18.238144 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:55:18.238164 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:55:18.238184 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:55:18.238203 | orchestrator | 2026-04-17 07:55:18.238223 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-17 07:55:18.238241 | orchestrator | Friday 17 April 2026 07:55:16 +0000 (0:00:01.484) 0:00:50.905 ********** 2026-04-17 07:55:18.238261 | orchestrator | skipping: [testbed-node-0] 2026-04-17 07:55:18.238281 | orchestrator | skipping: [testbed-node-1] 2026-04-17 07:55:18.238302 | orchestrator | skipping: [testbed-node-2] 2026-04-17 07:55:18.238339 | orchestrator | 2026-04-17 07:55:18.238360 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 07:55:18.238382 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-17 07:55:18.238402 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 07:55:18.238423 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 07:55:18.238443 | orchestrator | 2026-04-17 07:55:18.238463 | orchestrator | 2026-04-17 07:55:18.238482 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 07:55:18.238502 | orchestrator | Friday 17 April 2026 07:55:17 +0000 (0:00:01.703) 0:00:52.608 ********** 2026-04-17 07:55:18.238522 | orchestrator | =============================================================================== 2026-04-17 07:55:18.238542 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 32.96s 2026-04-17 07:55:18.238589 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 4.79s 2026-04-17 07:55:18.238607 | orchestrator | mariadb : include_tasks ------------------------------------------------- 3.00s 2026-04-17 07:55:18.238653 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.52s 2026-04-17 07:55:18.238671 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.29s 2026-04-17 07:55:18.238689 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 1.70s 2026-04-17 07:55:18.238707 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 1.48s 2026-04-17 07:55:18.238726 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 1.46s 2026-04-17 07:55:18.416731 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-17 07:55:18.424180 | orchestrator | + set -e 2026-04-17 07:55:18.424240 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 07:55:18.424254 | 
orchestrator | ++ export INTERACTIVE=false 2026-04-17 07:55:18.424266 | orchestrator | ++ INTERACTIVE=false 2026-04-17 07:55:18.424277 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 07:55:18.424289 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 07:55:18.424300 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-17 07:55:18.425235 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-17 07:55:18.431839 | orchestrator | 2026-04-17 07:55:18.431867 | orchestrator | # OpenStack endpoints 2026-04-17 07:55:18.431880 | orchestrator | 2026-04-17 07:55:18.431891 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-17 07:55:18.431902 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-17 07:55:18.431913 | orchestrator | + export OS_CLOUD=admin 2026-04-17 07:55:18.431924 | orchestrator | + OS_CLOUD=admin 2026-04-17 07:55:18.431935 | orchestrator | + echo 2026-04-17 07:55:18.431946 | orchestrator | + echo '# OpenStack endpoints' 2026-04-17 07:55:18.431957 | orchestrator | + echo 2026-04-17 07:55:18.431968 | orchestrator | + openstack endpoint list 2026-04-17 07:55:21.553454 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-17 07:55:21.553552 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-17 07:55:21.553568 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-17 07:55:21.553580 | orchestrator | | 1583874a3a854f88ae2dd7661342cd56 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-17 07:55:21.553591 | orchestrator | | 
17bb988afc5f4af2ab9226b869dd2c14 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-17 07:55:21.553691 | orchestrator | | 1a54bcbd716547209ee14dd4544e7bc2 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-04-17 07:55:21.553705 | orchestrator | | 21e6acc2cfbe492780c7b7898760048b | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-17 07:55:21.553716 | orchestrator | | 2ca49976c6774efdad26fd92aec8c4f6 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-17 07:55:21.553727 | orchestrator | | 2e074997e0d44eaca5fbfa8640f76cdc | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-17 07:55:21.553738 | orchestrator | | 2facb37b063a4bdea688c9a6b189c931 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-17 07:55:21.553748 | orchestrator | | 3e1b87a3a4ae4eec96c84a31e231ff23 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-17 07:55:21.553760 | orchestrator | | 406691b8452e499ca61f11a0a0ac6a00 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-17 07:55:21.553771 | orchestrator | | 4641d5511fc04a14bc35c7ca3f41a9d8 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-17 07:55:21.553782 | orchestrator | | 4d01b9be3cdc408c867a33403d9d1fca | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-17 07:55:21.553793 | orchestrator | | 5c7335f6ccc14893b03c6693a1e2c496 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-17 07:55:21.553804 | orchestrator | | 61470b1094334e55a70af8528263109b | RegionOne | cinderv3 | volumev3 | True | internal 
| https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-17 07:55:21.553815 | orchestrator | | 63fde595c8584042a01b658fef870ca6 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-17 07:55:21.553825 | orchestrator | | 67e90a3ab684402d89a71eef7bbf0eb2 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-04-17 07:55:21.553836 | orchestrator | | 96f121b9132140fb9ec7aa0107e961bd | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-17 07:55:21.553847 | orchestrator | | 97f3948d0e3f4194b0d102d45b13fe5f | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-17 07:55:21.553858 | orchestrator | | a4c83bcdd9a04f2582000e99a6747a8d | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-17 07:55:21.553884 | orchestrator | | ae6c659dea3b4bf885cc366671eb521e | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-17 07:55:21.553896 | orchestrator | | b047ab8f78e542128272a3da56722374 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-04-17 07:55:21.553925 | orchestrator | | b9825b5a0fcc47f599d0a336ce54069d | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-04-17 07:55:21.553937 | orchestrator | | d28b4d4ff2e04ba1a773bb4ed3405bc6 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-04-17 07:55:21.553956 | orchestrator | | d587aee4ad3249878471ee0bf4d95100 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-04-17 07:55:21.553970 | orchestrator | | d95477de5e334bf7896cc2636ba80625 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-17 07:55:21.553982 | orchestrator 
| | dc7c8612fb3a4699ad796aeacab78c55 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-17 07:55:21.553994 | orchestrator | | eb39c18aaff24e64b769828479bffa00 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-17 07:55:21.554006 | orchestrator | | f520a2f51f5b478c87c5ac9d924a9408 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-17 07:55:21.554108 | orchestrator | | f71f348ee24f4754b324e54f9c6af3c4 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-17 07:55:21.554125 | orchestrator | | f7b2619542e44d6f8dc9924676ce5505 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-17 07:55:21.554137 | orchestrator | | ff4a455cdf41437cbf83b5d79899d623 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-17 07:55:21.554149 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-17 07:55:21.807464 | orchestrator | 2026-04-17 07:55:21.807541 | orchestrator | # Cinder 2026-04-17 07:55:21.807551 | orchestrator | 2026-04-17 07:55:21.807559 | orchestrator | + echo 2026-04-17 07:55:21.807567 | orchestrator | + echo '# Cinder' 2026-04-17 07:55:21.807575 | orchestrator | + echo 2026-04-17 07:55:21.807582 | orchestrator | + openstack volume service list 2026-04-17 07:55:24.496364 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-17 07:55:24.496498 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-17 07:55:24.496515 | orchestrator | 
+------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-17 07:55:24.496527 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-17T07:55:17.000000 | 2026-04-17 07:55:24.496538 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-17T07:55:17.000000 | 2026-04-17 07:55:24.496549 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-17T07:55:17.000000 | 2026-04-17 07:55:24.496560 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-17T07:55:23.000000 | 2026-04-17 07:55:24.496571 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-17T07:55:23.000000 | 2026-04-17 07:55:24.496581 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-17T07:55:16.000000 | 2026-04-17 07:55:24.496592 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-17T07:55:18.000000 | 2026-04-17 07:55:24.496603 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-17T07:55:19.000000 | 2026-04-17 07:55:24.496614 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-17T07:55:22.000000 | 2026-04-17 07:55:24.496672 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-17 07:55:24.767792 | orchestrator | 2026-04-17 07:55:24.767881 | orchestrator | # Neutron 2026-04-17 07:55:24.767894 | orchestrator | 2026-04-17 07:55:24.767903 | orchestrator | + echo 2026-04-17 07:55:24.767912 | orchestrator | + echo '# Neutron' 2026-04-17 07:55:24.767921 | orchestrator | + echo 2026-04-17 07:55:24.767952 | orchestrator | + openstack network agent list 2026-04-17 07:55:27.572377 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-17 07:55:27.572479 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-17 07:55:27.572514 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-17 07:55:27.572526 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-17 07:55:27.572538 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-17 07:55:27.572549 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-17 07:55:27.572560 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-17 07:55:27.572571 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-17 07:55:27.572582 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-17 07:55:27.572593 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-17 07:55:27.572604 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-17 07:55:27.572614 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-17 07:55:27.572675 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 
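The agent table above encodes liveness in the `Alive` column (`:-)` for alive). A scripted health check would typically read that column in machine-readable form, e.g. `openstack network agent list -f value -c Alive`, and count anything that is not `:-)`. A self-contained sketch under that assumption, with a canned column standing in for the live output:

```shell
# Hypothetical sketch: count dead Neutron agents from an "Alive" column.
# In a real check the column would come from
#   openstack network agent list -f value -c Alive
# here canned sample data stands in (illustration only).
alive_column=':-)
:-)
:-)
:-)'
dead="$(printf '%s\n' "$alive_column" | grep -cv '^:-)$' || true)"
echo "dead agents: $dead"
```

With all nine agents in the table above reporting `:-)`, such a check would print `dead agents: 0` and pass.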
2026-04-17 07:55:27.877826 | orchestrator | + openstack network service provider list 2026-04-17 07:55:30.423812 | orchestrator | +---------------+------+---------+ 2026-04-17 07:55:30.423921 | orchestrator | | Service Type | Name | Default | 2026-04-17 07:55:30.423935 | orchestrator | +---------------+------+---------+ 2026-04-17 07:55:30.423947 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-17 07:55:30.423957 | orchestrator | +---------------+------+---------+ 2026-04-17 07:55:30.719694 | orchestrator | 2026-04-17 07:55:30.719794 | orchestrator | # Nova 2026-04-17 07:55:30.719809 | orchestrator | 2026-04-17 07:55:30.719820 | orchestrator | + echo 2026-04-17 07:55:30.719831 | orchestrator | + echo '# Nova' 2026-04-17 07:55:30.719843 | orchestrator | + echo 2026-04-17 07:55:30.719858 | orchestrator | + openstack compute service list 2026-04-17 07:55:34.138136 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-17 07:55:34.138237 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-17 07:55:34.138259 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-17 07:55:34.138277 | orchestrator | | bd33767f-2e7f-4a33-a23d-c7fa5a79ff1c | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-17T07:55:24.000000 | 2026-04-17 07:55:34.138295 | orchestrator | | aa03edaa-dd3f-4638-a4af-92ca70e27997 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-17T07:55:24.000000 | 2026-04-17 07:55:34.138313 | orchestrator | | 833a183d-d5da-4d1b-a421-5036be2ccc37 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-17T07:55:24.000000 | 2026-04-17 07:55:34.138365 | orchestrator | | 2fd01797-ef71-41f7-8242-1d73336183c1 | nova-conductor | testbed-node-0 | internal | enabled | up | 
2026-04-17T07:55:33.000000 | 2026-04-17 07:55:34.138383 | orchestrator | | 1c17a92b-5501-4b06-89c0-b66434d590b5 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-17T07:55:26.000000 | 2026-04-17 07:55:34.138401 | orchestrator | | 97b54230-d60a-4517-b42e-7293226832a8 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-17T07:55:26.000000 | 2026-04-17 07:55:34.138417 | orchestrator | | 6c11e930-75d2-4523-9663-e8e7cf357df9 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-17T07:55:32.000000 | 2026-04-17 07:55:34.138433 | orchestrator | | 17676f97-c4fe-4882-9d5e-d44a0c486786 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-17T07:55:25.000000 | 2026-04-17 07:55:34.138451 | orchestrator | | a7138f0f-cddc-486a-ac64-e457b9b04bbe | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-17T07:55:27.000000 | 2026-04-17 07:55:34.138470 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-17 07:55:34.418176 | orchestrator | + openstack hypervisor list 2026-04-17 07:55:37.060032 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-17 07:55:37.060161 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-17 07:55:37.060180 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-17 07:55:37.060191 | orchestrator | | 42750323-5f87-49a8-81e3-c06816c96743 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-17 07:55:37.060219 | orchestrator | | 9c34c48a-d7a0-4cfe-9b8a-4ec5b04163f3 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-17 07:55:37.060229 | orchestrator | | 06c80643-ac2f-4c9c-819b-81d84b49467f | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-17 07:55:37.060239 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-17 07:55:37.331038 | orchestrator | + echo 2026-04-17 07:55:37.331517 | orchestrator | 2026-04-17 07:55:37.331547 | orchestrator | # Run OpenStack test play 2026-04-17 07:55:37.331560 | orchestrator | 2026-04-17 07:55:37.331572 | orchestrator | + echo '# Run OpenStack test play' 2026-04-17 07:55:37.331584 | orchestrator | + echo 2026-04-17 07:55:37.331595 | orchestrator | + osism apply --environment openstack test 2026-04-17 07:55:38.710283 | orchestrator | 2026-04-17 07:55:38 | INFO  | Trying to run play test in environment openstack 2026-04-17 07:55:48.756014 | orchestrator | 2026-04-17 07:55:48 | INFO  | Prepare task for execution of test. 2026-04-17 07:55:48.839441 | orchestrator | 2026-04-17 07:55:48 | INFO  | Task 9ebc4fa0-d851-418e-8a1d-5061f03740dc (test) was prepared for execution. 2026-04-17 07:55:48.839526 | orchestrator | 2026-04-17 07:55:48 | INFO  | It takes a moment until task 9ebc4fa0-d851-418e-8a1d-5061f03740dc (test) has been started and output is visible here. 
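The Cinder and Nova listings above all hinge on the same invariant: every service row must report `State` = `up`. A minimal sketch of that assertion, using canned rows modelled on `openstack volume service list -f value -c Binary -c State` (the sample data and variable names are illustrative, not taken from this job):

```shell
# Hypothetical sketch: fail when any service row reports a state other
# than "up". Canned sample rows stand in for live CLI output.
rows='cinder-scheduler up
cinder-volume up
cinder-backup up'
down="$(printf '%s\n' "$rows" | awk '$2 != "up" { print $1 }')"
if [ -z "$down" ]; then
    echo "all services up"
else
    echo "services down: $down"
fi
```

The same pattern applies unchanged to `openstack compute service list`; the check scripts in this job instead print the full tables for the log and rely on the test play below to exercise the services end to end.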
2026-04-17 07:58:26.933847 | orchestrator | 2026-04-17 07:58:26.933951 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-17 07:58:26.933966 | orchestrator | 2026-04-17 07:58:26.933976 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-17 07:58:26.933985 | orchestrator | Friday 17 April 2026 07:55:53 +0000 (0:00:01.463) 0:00:01.463 ********** 2026-04-17 07:58:26.933994 | orchestrator | ok: [localhost] 2026-04-17 07:58:26.934004 | orchestrator | 2026-04-17 07:58:26.934013 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-17 07:58:26.934075 | orchestrator | Friday 17 April 2026 07:56:00 +0000 (0:00:06.105) 0:00:07.569 ********** 2026-04-17 07:58:26.934084 | orchestrator | ok: [localhost] 2026-04-17 07:58:26.934093 | orchestrator | 2026-04-17 07:58:26.934102 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-17 07:58:26.934111 | orchestrator | Friday 17 April 2026 07:56:05 +0000 (0:00:05.072) 0:00:12.641 ********** 2026-04-17 07:58:26.934170 | orchestrator | changed: [localhost] 2026-04-17 07:58:26.934181 | orchestrator | 2026-04-17 07:58:26.934190 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-17 07:58:26.934199 | orchestrator | Friday 17 April 2026 07:56:14 +0000 (0:00:09.287) 0:00:21.929 ********** 2026-04-17 07:58:26.934208 | orchestrator | ok: [localhost] 2026-04-17 07:58:26.934217 | orchestrator | 2026-04-17 07:58:26.934226 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-17 07:58:26.934235 | orchestrator | Friday 17 April 2026 07:56:19 +0000 (0:00:05.127) 0:00:27.056 ********** 2026-04-17 07:58:26.934243 | orchestrator | ok: [localhost] 2026-04-17 07:58:26.934252 | orchestrator | 2026-04-17 07:58:26.934261 | orchestrator | TASK 
[Add member roles to user test] ******************************************* 2026-04-17 07:58:26.934270 | orchestrator | Friday 17 April 2026 07:56:24 +0000 (0:00:05.035) 0:00:32.091 ********** 2026-04-17 07:58:26.934279 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-17 07:58:26.934288 | orchestrator | ok: [localhost] => (item=member) 2026-04-17 07:58:26.934298 | orchestrator | changed: [localhost] => (item=creator) 2026-04-17 07:58:26.934307 | orchestrator | 2026-04-17 07:58:26.934316 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-17 07:58:26.934325 | orchestrator | Friday 17 April 2026 07:56:37 +0000 (0:00:13.140) 0:00:45.231 ********** 2026-04-17 07:58:26.934333 | orchestrator | ok: [localhost] 2026-04-17 07:58:26.934342 | orchestrator | 2026-04-17 07:58:26.934351 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-17 07:58:26.934359 | orchestrator | Friday 17 April 2026 07:56:43 +0000 (0:00:05.484) 0:00:50.716 ********** 2026-04-17 07:58:26.934368 | orchestrator | ok: [localhost] 2026-04-17 07:58:26.934377 | orchestrator | 2026-04-17 07:58:26.934385 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-17 07:58:26.934394 | orchestrator | Friday 17 April 2026 07:56:48 +0000 (0:00:05.328) 0:00:56.045 ********** 2026-04-17 07:58:26.934404 | orchestrator | ok: [localhost] 2026-04-17 07:58:26.934414 | orchestrator | 2026-04-17 07:58:26.934424 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-17 07:58:26.934434 | orchestrator | Friday 17 April 2026 07:56:53 +0000 (0:00:05.270) 0:01:01.315 ********** 2026-04-17 07:58:26.934444 | orchestrator | ok: [localhost] 2026-04-17 07:58:26.934453 | orchestrator | 2026-04-17 07:58:26.934463 | orchestrator | TASK [Add rule to icmp security group] 
***************************************** 2026-04-17 07:58:26.934473 | orchestrator | Friday 17 April 2026 07:56:58 +0000 (0:00:05.001) 0:01:06.317 ********** 2026-04-17 07:58:26.934483 | orchestrator | ok: [localhost] 2026-04-17 07:58:26.934493 | orchestrator | 2026-04-17 07:58:26.934503 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-17 07:58:26.934513 | orchestrator | Friday 17 April 2026 07:57:03 +0000 (0:00:04.818) 0:01:11.136 ********** 2026-04-17 07:58:26.934522 | orchestrator | ok: [localhost] 2026-04-17 07:58:26.934532 | orchestrator | 2026-04-17 07:58:26.934542 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-17 07:58:26.934552 | orchestrator | Friday 17 April 2026 07:57:08 +0000 (0:00:05.015) 0:01:16.151 ********** 2026-04-17 07:58:26.934562 | orchestrator | ok: [localhost] => (item={'name': 'test-1'}) 2026-04-17 07:58:26.934573 | orchestrator | ok: [localhost] => (item={'name': 'test-2'}) 2026-04-17 07:58:26.934602 | orchestrator | ok: [localhost] => (item={'name': 'test-3'}) 2026-04-17 07:58:26.934613 | orchestrator | 2026-04-17 07:58:26.934622 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-17 07:58:26.934633 | orchestrator | Friday 17 April 2026 07:57:21 +0000 (0:00:12.562) 0:01:28.714 ********** 2026-04-17 07:58:26.934643 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-17 07:58:26.934654 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-17 07:58:26.934663 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-17 07:58:26.934679 | orchestrator | 2026-04-17 07:58:26.934688 | orchestrator | TASK [Create test routers] 
***************************************************** 2026-04-17 07:58:26.934697 | orchestrator | Friday 17 April 2026 07:57:34 +0000 (0:00:12.900) 0:01:41.615 ********** 2026-04-17 07:58:26.934706 | orchestrator | ok: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-17 07:58:26.934715 | orchestrator | ok: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-17 07:58:26.934723 | orchestrator | ok: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-17 07:58:26.934732 | orchestrator | 2026-04-17 07:58:26.934740 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-17 07:58:26.934749 | orchestrator | 2026-04-17 07:58:26.934758 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-17 07:58:26.934766 | orchestrator | Friday 17 April 2026 07:57:49 +0000 (0:00:15.369) 0:01:56.985 ********** 2026-04-17 07:58:26.934775 | orchestrator | ok: [localhost] 2026-04-17 07:58:26.934783 | orchestrator | 2026-04-17 07:58:26.934807 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-04-17 07:58:26.934816 | orchestrator | Friday 17 April 2026 07:57:54 +0000 (0:00:04.972) 0:02:01.957 ********** 2026-04-17 07:58:26.934825 | orchestrator | skipping: [localhost] 2026-04-17 07:58:26.934833 | orchestrator | 2026-04-17 07:58:26.934842 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-17 07:58:26.934850 | orchestrator | Friday 17 April 2026 07:57:55 +0000 (0:00:01.177) 0:02:03.135 ********** 2026-04-17 07:58:26.934859 | orchestrator | skipping: [localhost] 2026-04-17 07:58:26.934868 | orchestrator | 2026-04-17 07:58:26.934876 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-17 07:58:26.934885 | orchestrator | Friday 17 April 2026 
07:57:56 +0000 (0:00:01.170) 0:02:04.305 **********
2026-04-17 07:58:26.934893 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-17 07:58:26.934902 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-17 07:58:26.934910 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-17 07:58:26.934919 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-17 07:58:26.934942 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-17 07:58:26.934951 | orchestrator | skipping: [localhost]
2026-04-17 07:58:26.934959 | orchestrator |
2026-04-17 07:58:26.934968 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-17 07:58:26.934977 | orchestrator | Friday 17 April 2026 07:57:58 +0000 (0:00:01.320) 0:02:05.626 **********
2026-04-17 07:58:26.934985 | orchestrator | skipping: [localhost]
2026-04-17 07:58:26.934994 | orchestrator |
2026-04-17 07:58:26.935002 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-17 07:58:26.935011 | orchestrator | Friday 17 April 2026 07:57:59 +0000 (0:00:01.259) 0:02:06.886 **********
2026-04-17 07:58:26.935019 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-17 07:58:26.935050 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-17 07:58:26.935069 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-17 07:58:26.935088 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-17 07:58:26.935098 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-17 07:58:26.935106 | orchestrator |
2026-04-17 07:58:26.935115 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-04-17 07:58:26.935124 | orchestrator | Friday 17 April 2026 07:58:05 +0000 (0:00:05.858) 0:02:12.744 **********
2026-04-17 07:58:26.935132 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-17 07:58:26.935149 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j159568027478.3574', 'results_file': '/ansible/.ansible_async/j159568027478.3574', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-17 07:58:26.935161 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j272820626647.3599', 'results_file': '/ansible/.ansible_async/j272820626647.3599', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-17 07:58:26.935170 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j297868054959.3624', 'results_file': '/ansible/.ansible_async/j297868054959.3624', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-17 07:58:26.935179 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j550405306738.3649', 'results_file': '/ansible/.ansible_async/j550405306738.3649', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-17 07:58:26.935192 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j882647952649.3674', 'results_file': '/ansible/.ansible_async/j882647952649.3674', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-17 07:58:26.935201 | orchestrator |
2026-04-17 07:58:26.935210 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-17 07:58:26.935219 | orchestrator | Friday 17 April 2026 07:58:21 +0000 (0:00:15.922) 0:02:28.666 **********
2026-04-17 07:58:26.935228 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-17 07:58:26.935237 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-17 07:58:26.935246 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-17 07:58:26.935254 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-17 07:58:26.935263 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-17 07:58:26.935272 | orchestrator |
2026-04-17 07:58:26.935281 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-17 07:58:26.935294 | orchestrator | Friday 17 April 2026 07:58:26 +0000 (0:00:05.793) 0:02:34.460 **********
2026-04-17 07:59:27.547294 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j436984338555.3745', 'results_file': '/ansible/.ansible_async/j436984338555.3745', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-17 07:59:27.547409 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j896851060739.3770', 'results_file': '/ansible/.ansible_async/j896851060739.3770', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-17 07:59:27.547426 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j192201552984.3795', 'results_file': '/ansible/.ansible_async/j192201552984.3795', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-17 07:59:27.547439 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j128812980895.3820', 'results_file': '/ansible/.ansible_async/j128812980895.3820', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-17 07:59:27.547450 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j12358094896.3845', 'results_file': '/ansible/.ansible_async/j12358094896.3845', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-17 07:59:27.547488 | orchestrator |
2026-04-17 07:59:27.547501 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-17 07:59:27.547513 | orchestrator | Friday 17 April 2026 07:58:31 +0000 (0:00:04.528) 0:02:38.989 **********
2026-04-17 07:59:27.547524 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-17 07:59:27.547535 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-17 07:59:27.547546 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-17 07:59:27.547557 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-17 07:59:27.547612 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-17 07:59:27.547624 | orchestrator |
2026-04-17 07:59:27.547635 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-17 07:59:27.547646 | orchestrator | Friday 17 April 2026 07:58:37 +0000 (0:00:05.579) 0:02:44.568 **********
2026-04-17 07:59:27.547656 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-17 07:59:27.547668 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j478166330400.3917', 'results_file': '/ansible/.ansible_async/j478166330400.3917', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-17 07:59:27.547680 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j662458757863.3942', 'results_file': '/ansible/.ansible_async/j662458757863.3942', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-17 07:59:27.547692 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j946252705777.3968', 'results_file': '/ansible/.ansible_async/j946252705777.3968', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-17 07:59:27.547719 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j27404124955.3994', 'results_file': '/ansible/.ansible_async/j27404124955.3994', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-17 07:59:27.547731 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j11393225974.4020', 'results_file': '/ansible/.ansible_async/j11393225974.4020', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-17 07:59:27.547741 | orchestrator |
2026-04-17 07:59:27.547752 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-17 07:59:27.547763 | orchestrator | Friday 17 April 2026 07:58:48 +0000 (0:00:11.547) 0:02:56.116 **********
2026-04-17 07:59:27.547774 | orchestrator | ok: [localhost]
2026-04-17 07:59:27.547786 | orchestrator |
2026-04-17 07:59:27.547797 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-17 07:59:27.547808 | orchestrator | Friday 17 April 2026 07:58:53 +0000 (0:00:05.200) 0:03:01.316 **********
2026-04-17 07:59:27.547818 | orchestrator | ok: [localhost]
2026-04-17 07:59:27.547831 | orchestrator |
2026-04-17 07:59:27.547843 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-17 07:59:27.547873 | orchestrator | Friday 17 April 2026 07:58:59 +0000 (0:00:06.035) 0:03:07.352 **********
2026-04-17 07:59:27.547886 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-17 07:59:27.547899 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-17 07:59:27.547911 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-17 07:59:27.547924 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-17 07:59:27.547936 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-17 07:59:27.547958 | orchestrator |
2026-04-17 07:59:27.547971 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-17 07:59:27.547983 | orchestrator | Friday 17 April 2026 07:59:25 +0000 (0:00:25.856) 0:03:33.209 **********
2026-04-17 07:59:27.547996 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-17 07:59:27.548008 | orchestrator |     "msg": "test: 192.168.112.133"
2026-04-17 07:59:27.548020 | orchestrator | }
2026-04-17 07:59:27.548033 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-17 07:59:27.548045 | orchestrator |     "msg": "test-1: 192.168.112.109"
2026-04-17 07:59:27.548057 | orchestrator | }
2026-04-17 07:59:27.548070 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-17 07:59:27.548083 | orchestrator |     "msg": "test-2: 192.168.112.153"
2026-04-17 07:59:27.548095 | orchestrator | }
2026-04-17 07:59:27.548109 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-17 07:59:27.548121 | orchestrator |     "msg": "test-3: 192.168.112.122"
2026-04-17 07:59:27.548133 | orchestrator | }
2026-04-17 07:59:27.548145 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-17 07:59:27.548158 | orchestrator |     "msg": "test-4: 192.168.112.193"
2026-04-17 07:59:27.548171 | orchestrator | }
2026-04-17 07:59:27.548183 | orchestrator |
2026-04-17 07:59:27.548194 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 07:59:27.548206 | orchestrator | localhost : ok=26  changed=8  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-17 07:59:27.548217 | orchestrator |
2026-04-17 07:59:27.548228 | orchestrator |
2026-04-17 07:59:27.548239 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 07:59:27.548250 | orchestrator | Friday 17 April 2026 07:59:27 +0000 (0:00:01.592) 0:03:34.801 **********
2026-04-17 07:59:27.548261 | orchestrator | ===============================================================================
2026-04-17 07:59:27.548271 | orchestrator | Create floating ip addresses ------------------------------------------- 25.86s
2026-04-17 07:59:27.548282 | orchestrator | Wait for instance creation to complete --------------------------------- 15.92s
2026-04-17 07:59:27.548293 | orchestrator | Create test routers ---------------------------------------------------- 15.37s
2026-04-17 07:59:27.548304 | orchestrator | Add member roles to user test ------------------------------------------ 13.14s
2026-04-17 07:59:27.548315 | orchestrator | Create test subnets ---------------------------------------------------- 12.90s
2026-04-17 07:59:27.548325 | orchestrator | Create test networks --------------------------------------------------- 12.56s
2026-04-17 07:59:27.548336 | orchestrator | Wait for tags to be added ---------------------------------------------- 11.55s
2026-04-17 07:59:27.548347 | orchestrator | Add manager role to user test-admin ------------------------------------- 9.29s
2026-04-17 07:59:27.548358 | orchestrator | Create test domain ------------------------------------------------------ 6.11s
2026-04-17 07:59:27.548369 | orchestrator | Attach test volume ------------------------------------------------------ 6.04s
2026-04-17 07:59:27.548379 | orchestrator | Create test instances --------------------------------------------------- 5.86s
2026-04-17 07:59:27.548390 | orchestrator | Add metadata to instances ----------------------------------------------- 5.79s
2026-04-17 07:59:27.548401 | orchestrator | Add tag to instances ---------------------------------------------------- 5.58s
2026-04-17 07:59:27.548411 | orchestrator | Create test server group ------------------------------------------------ 5.48s
2026-04-17 07:59:27.548422 | orchestrator | Create ssh security group ----------------------------------------------- 5.33s
2026-04-17 07:59:27.548433 | orchestrator | Add rule to ssh security group ------------------------------------------ 5.27s
2026-04-17 07:59:27.548444 | orchestrator | Create test volume ------------------------------------------------------ 5.20s
2026-04-17 07:59:27.548455 | orchestrator | Create test project ----------------------------------------------------- 5.13s
2026-04-17 07:59:27.548465 | orchestrator | Create test-admin user -------------------------------------------------- 5.07s
2026-04-17 07:59:27.548488 | orchestrator | Create test user -------------------------------------------------------- 5.04s
2026-04-17 07:59:27.738863 | orchestrator | + server_list
2026-04-17 07:59:27.738954 | orchestrator | + openstack --os-cloud test server list
2026-04-17 07:59:31.493288 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-17 07:59:31.493395 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-17 07:59:31.493410 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-17 07:59:31.493422 | orchestrator | | ade52472-1613-40d8-aa9f-b5a0cc6f0d77 | test-4 | ACTIVE | test-3=192.168.112.193, 192.168.202.47 | N/A (booted from volume) | SCS-1L-1 |
2026-04-17 07:59:31.493433 | orchestrator | | 10df8f62-29ae-4e8f-92d4-b78dfab79c06 | test-2 | ACTIVE | test-2=192.168.112.153, 192.168.201.130 | N/A (booted from volume) | SCS-1L-1 |
2026-04-17 07:59:31.493444 | orchestrator | | 42216a94-69f4-42ce-a51a-18de589b5980 | test-1 | ACTIVE | test-1=192.168.112.109, 192.168.200.11 | N/A (booted from volume) | SCS-1L-1 |
2026-04-17 07:59:31.493455 | orchestrator | | 49af2b6b-55c5-42f8-bf30-a81aa8a4e60b | test-3 | ACTIVE | test-2=192.168.112.122, 192.168.201.105 | N/A (booted from volume) | SCS-1L-1 |
2026-04-17 07:59:31.493466 | orchestrator | | b8742be6-41cf-41ee-8066-4d495a4e1434 | test | ACTIVE | test-1=192.168.112.133, 192.168.200.69 | N/A (booted from volume) | SCS-1L-1 |
2026-04-17 07:59:31.493477 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-17 07:59:31.795762 | orchestrator | + openstack --os-cloud test server show test
2026-04-17 07:59:34.999039 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:34.999161 | orchestrator | | Field | Value |
2026-04-17 07:59:34.999178 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:34.999191 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-17 07:59:34.999203 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-17 07:59:34.999215 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-17 07:59:34.999252 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-17 07:59:34.999266 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-17 07:59:34.999278 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-17 07:59:34.999309 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-17 07:59:34.999322 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-17 07:59:34.999334 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-17 07:59:34.999345 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-17 07:59:34.999357 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-17 07:59:34.999369 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-17 07:59:34.999387 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-17 07:59:34.999403 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-17 07:59:34.999415 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-17 07:59:34.999427 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T05:07:04.000000 |
2026-04-17 07:59:34.999446 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-17 07:59:34.999458 | orchestrator | | accessIPv4 | |
2026-04-17 07:59:34.999470 | orchestrator | | accessIPv6 | |
2026-04-17 07:59:34.999481 | orchestrator | | addresses | test-1=192.168.112.133, 192.168.200.69 |
2026-04-17 07:59:34.999493 | orchestrator | | config_drive | |
2026-04-17 07:59:34.999510 | orchestrator | | created | 2026-04-17T05:06:38Z |
2026-04-17 07:59:34.999522 | orchestrator | | description | None |
2026-04-17 07:59:34.999537 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-17 07:59:34.999549 | orchestrator | | hostId | 61880b69f50018901e2e10e887010d9cd89861e535470105893d2ad2 |
2026-04-17 07:59:34.999560 | orchestrator | | host_status | None |
2026-04-17 07:59:34.999606 | orchestrator | | id | b8742be6-41cf-41ee-8066-4d495a4e1434 |
2026-04-17 07:59:34.999618 | orchestrator | | image | N/A (booted from volume) |
2026-04-17 07:59:34.999629 | orchestrator | | key_name | test |
2026-04-17 07:59:34.999640 | orchestrator | | locked | False |
2026-04-17 07:59:34.999658 | orchestrator | | locked_reason | None |
2026-04-17 07:59:34.999669 | orchestrator | | name | test |
2026-04-17 07:59:34.999681 | orchestrator | | pinned_availability_zone | None |
2026-04-17 07:59:35.000032 | orchestrator | | progress | 0 |
2026-04-17 07:59:35.000047 | orchestrator | | project_id | d684144c40b742af8c0edaad54fe7ba2 |
2026-04-17 07:59:35.000059 | orchestrator | | properties | hostname='test' |
2026-04-17 07:59:35.000079 | orchestrator | | security_groups | name='icmp' |
2026-04-17 07:59:35.000091 | orchestrator | | | name='ssh' |
2026-04-17 07:59:35.000102 | orchestrator | | server_groups | None |
2026-04-17 07:59:35.000114 | orchestrator | | status | ACTIVE |
2026-04-17 07:59:35.000142 | orchestrator | | tags | test |
2026-04-17 07:59:35.000153 | orchestrator | | trusted_image_certificates | None |
2026-04-17 07:59:35.000165 | orchestrator | | updated | 2026-04-17T07:58:27Z |
2026-04-17 07:59:35.000176 | orchestrator | | user_id | 232c7f32176f4cb293228fac19d8e2ff |
2026-04-17 07:59:35.000188 | orchestrator | | volumes_attached | delete_on_termination='True', id='8053a2c6-4ff5-4dc7-9b75-b7bf147b4af2' |
2026-04-17 07:59:35.000199 | orchestrator | | | delete_on_termination='False', id='3ae2f902-71fb-4320-883a-20f3ed544819' |
2026-04-17 07:59:35.002313 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:35.294738 | orchestrator | + openstack --os-cloud test server show test-1
2026-04-17 07:59:38.238336 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:38.238421 | orchestrator | | Field | Value |
2026-04-17 07:59:38.238452 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:38.238474 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-17 07:59:38.238483 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-17 07:59:38.238491 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-17 07:59:38.238499 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-04-17 07:59:38.238507 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-17 07:59:38.238515 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-17 07:59:38.238537 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-17 07:59:38.238546 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-17 07:59:38.238613 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-17 07:59:38.238623 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-17 07:59:38.238636 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-17 07:59:38.238644 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-17 07:59:38.238652 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-17 07:59:38.238660 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-17 07:59:38.238668 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-17 07:59:38.238676 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T05:07:04.000000 |
2026-04-17 07:59:38.238689 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-17 07:59:38.238703 | orchestrator | | accessIPv4 | |
2026-04-17 07:59:38.238711 | orchestrator | | accessIPv6 | |
2026-04-17 07:59:38.238723 | orchestrator | | addresses | test-1=192.168.112.109, 192.168.200.11 |
2026-04-17 07:59:38.238731 | orchestrator | | config_drive | |
2026-04-17 07:59:38.238739 | orchestrator | | created | 2026-04-17T05:06:39Z |
2026-04-17 07:59:38.238747 | orchestrator | | description | None |
2026-04-17 07:59:38.238755 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-17 07:59:38.238763 | orchestrator | | hostId | 61880b69f50018901e2e10e887010d9cd89861e535470105893d2ad2 |
2026-04-17 07:59:38.238771 | orchestrator | | host_status | None |
2026-04-17 07:59:38.238790 | orchestrator | | id | 42216a94-69f4-42ce-a51a-18de589b5980 |
2026-04-17 07:59:38.238798 | orchestrator | | image | N/A (booted from volume) |
2026-04-17 07:59:38.238806 | orchestrator | | key_name | test |
2026-04-17 07:59:38.238819 | orchestrator | | locked | False |
2026-04-17 07:59:38.238827 | orchestrator | | locked_reason | None |
2026-04-17 07:59:38.238835 | orchestrator | | name | test-1 |
2026-04-17 07:59:38.238843 | orchestrator | | pinned_availability_zone | None |
2026-04-17 07:59:38.238851 | orchestrator | | progress | 0 |
2026-04-17 07:59:38.238859 | orchestrator | | project_id | d684144c40b742af8c0edaad54fe7ba2 |
2026-04-17 07:59:38.238866 | orchestrator | | properties | hostname='test-1' |
2026-04-17 07:59:38.238885 | orchestrator | | security_groups | name='icmp' |
2026-04-17 07:59:38.238895 | orchestrator | | | name='ssh' |
2026-04-17 07:59:38.238904 | orchestrator | | server_groups | None |
2026-04-17 07:59:38.238917 | orchestrator | | status | ACTIVE |
2026-04-17 07:59:38.238927 | orchestrator | | tags | test |
2026-04-17 07:59:38.238936 | orchestrator | | trusted_image_certificates | None |
2026-04-17 07:59:38.238945 | orchestrator | | updated | 2026-04-17T07:58:27Z |
2026-04-17 07:59:38.238955 | orchestrator | | user_id | 232c7f32176f4cb293228fac19d8e2ff |
2026-04-17 07:59:38.238964 | orchestrator | | volumes_attached | delete_on_termination='True', id='16b0605e-ac56-49f1-a4b1-5a304824d63e' |
2026-04-17 07:59:38.241902 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:38.558877 | orchestrator | + openstack --os-cloud test server show test-2
2026-04-17 07:59:41.448499 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:41.448652 | orchestrator | | Field | Value |
2026-04-17 07:59:41.448669 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:41.448679 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-17 07:59:41.448689 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-17 07:59:41.448698 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-17 07:59:41.448707 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-04-17 07:59:41.448716 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-17 07:59:41.448747 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-17 07:59:41.448773 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-17 07:59:41.448783 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-17 07:59:41.448857 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-17 07:59:41.448873 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-17 07:59:41.448886 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-17 07:59:41.448896 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-17 07:59:41.448905 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-17 07:59:41.448914 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-17 07:59:41.448929 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-17 07:59:41.448938 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T05:07:04.000000 |
2026-04-17 07:59:41.448953 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-17 07:59:41.448963 | orchestrator | | accessIPv4 | |
2026-04-17 07:59:41.448972 | orchestrator | | accessIPv6 | |
2026-04-17 07:59:41.448981 | orchestrator | | addresses | test-2=192.168.112.153, 192.168.201.130 |
2026-04-17 07:59:41.448994 | orchestrator | | config_drive | |
2026-04-17 07:59:41.449011 | orchestrator | | created | 2026-04-17T05:06:39Z |
2026-04-17 07:59:41.449026 | orchestrator | | description | None |
2026-04-17 07:59:41.449049 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-17 07:59:41.449066 | orchestrator | | hostId | 9a6fc80b5ef5804c541967b849f5345d94f707e64997816cdf8509da |
2026-04-17 07:59:41.449083 | orchestrator | | host_status | None |
2026-04-17 07:59:41.449102 | orchestrator | | id | 10df8f62-29ae-4e8f-92d4-b78dfab79c06 |
2026-04-17 07:59:41.449112 | orchestrator | | image | N/A (booted from volume) |
2026-04-17 07:59:41.449122 | orchestrator | | key_name | test |
2026-04-17 07:59:41.449241 | orchestrator | | locked | False |
2026-04-17 07:59:41.449258 | orchestrator | | locked_reason | None |
2026-04-17 07:59:41.449268 | orchestrator | | name | test-2 |
2026-04-17 07:59:41.449279 | orchestrator | | pinned_availability_zone | None |
2026-04-17 07:59:41.449296 | orchestrator | | progress | 0 |
2026-04-17 07:59:41.449306 | orchestrator | | project_id | d684144c40b742af8c0edaad54fe7ba2 |
2026-04-17 07:59:41.449317 | orchestrator | | properties | hostname='test-2' |
2026-04-17 07:59:41.449335 | orchestrator | | security_groups | name='icmp' |
2026-04-17 07:59:41.449345 | orchestrator | | | name='ssh' |
2026-04-17 07:59:41.449355 | orchestrator | | server_groups | None |
2026-04-17 07:59:41.449366 | orchestrator | | status | ACTIVE |
2026-04-17 07:59:41.449381 | orchestrator | | tags | test |
2026-04-17 07:59:41.449391 | orchestrator | | trusted_image_certificates | None |
2026-04-17 07:59:41.449407 | orchestrator | | updated | 2026-04-17T07:58:28Z |
2026-04-17 07:59:41.449416 | orchestrator | | user_id | 232c7f32176f4cb293228fac19d8e2ff |
2026-04-17 07:59:41.449425 | orchestrator | | volumes_attached | delete_on_termination='True', id='578663b6-c726-4521-9fb0-d89e204fe08b' |
2026-04-17 07:59:41.449434 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:41.717688 | orchestrator | + openstack --os-cloud test server show test-3
2026-04-17 07:59:44.583214 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:44.583342 | orchestrator | | Field | Value |
2026-04-17 07:59:44.583358 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:44.583388 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-17 07:59:44.583400 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-17 07:59:44.583432 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-17 07:59:44.583444 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-04-17 07:59:44.583455 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-17 07:59:44.583467 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-17 07:59:44.583521 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-17 07:59:44.583535 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-17 07:59:44.583546 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-17 07:59:44.583557 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-17 07:59:44.583640 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-17 07:59:44.583662 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-17 07:59:44.583673 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-17 07:59:44.583684 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-17 07:59:44.583695 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-17 07:59:44.583707 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T05:07:04.000000 |
2026-04-17 07:59:44.583726 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-17 07:59:44.583738 | orchestrator | | accessIPv4 | |
2026-04-17 07:59:44.583749 | orchestrator | | accessIPv6 | |
2026-04-17 07:59:44.583760 | orchestrator | | addresses | test-2=192.168.112.122, 192.168.201.105 |
2026-04-17 07:59:44.583776 | orchestrator | | config_drive | |
2026-04-17 07:59:44.583802 | orchestrator | | created | 2026-04-17T05:06:39Z |
2026-04-17 07:59:44.583814 | orchestrator | | description | None |
2026-04-17 07:59:44.583825 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-17 07:59:44.583836 | orchestrator | | hostId | 9a6fc80b5ef5804c541967b849f5345d94f707e64997816cdf8509da |
2026-04-17 07:59:44.583847 | orchestrator | | host_status | None |
2026-04-17 07:59:44.583866 | orchestrator | | id | 49af2b6b-55c5-42f8-bf30-a81aa8a4e60b |
2026-04-17 07:59:44.583878 | orchestrator | | image | N/A (booted from volume) |
2026-04-17 07:59:44.583889 | orchestrator | | key_name | test |
2026-04-17 07:59:44.583900 | orchestrator | | locked | False |
2026-04-17 07:59:44.583922 | orchestrator | | locked_reason | None |
2026-04-17 07:59:44.583933 | orchestrator | | name | test-3 |
2026-04-17 07:59:44.583945 | orchestrator | | pinned_availability_zone | None |
2026-04-17 07:59:44.583956 | orchestrator | | progress | 0 |
2026-04-17 07:59:44.583967 | orchestrator | | project_id | d684144c40b742af8c0edaad54fe7ba2 |
2026-04-17 07:59:44.583978 | orchestrator | | properties | hostname='test-3' |
2026-04-17 07:59:44.583995 | orchestrator | | security_groups | name='icmp' |
2026-04-17 07:59:44.584007 | orchestrator | | | name='ssh' |
2026-04-17 07:59:44.584018 | orchestrator | | server_groups | None |
2026-04-17 07:59:44.584039 | orchestrator | | status | ACTIVE |
2026-04-17 07:59:44.584051 | orchestrator | | tags | test |
2026-04-17 07:59:44.584062 | orchestrator | | trusted_image_certificates | None |
2026-04-17 07:59:44.584105 | orchestrator | | updated | 2026-04-17T07:58:29Z |
2026-04-17 07:59:44.584117 | orchestrator | | user_id | 232c7f32176f4cb293228fac19d8e2ff |
2026-04-17 07:59:44.584129 | orchestrator | | volumes_attached | delete_on_termination='True', id='21bca45f-c565-4dce-8d3d-7605cd53b89e' |
2026-04-17 07:59:44.587541 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-17 07:59:44.856603 | orchestrator | + openstack --os-cloud test server show test-4
2026-04-17 07:59:47.833069 |
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 07:59:47.833171 | orchestrator | | Field | Value | 2026-04-17 07:59:47.833212 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 07:59:47.833224 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-17 07:59:47.833248 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-17 07:59:47.833259 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-17 07:59:47.833269 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-17 07:59:47.833280 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-17 07:59:47.833290 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-17 07:59:47.833319 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-17 07:59:47.833330 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-17 07:59:47.833347 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-17 07:59:47.833357 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-17 07:59:47.833367 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-17 07:59:47.833382 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-17 07:59:47.833392 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-04-17 07:59:47.833402 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-17 07:59:47.833412 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-17 07:59:47.833422 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T05:07:06.000000 | 2026-04-17 07:59:47.833438 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-17 07:59:47.833449 | orchestrator | | accessIPv4 | | 2026-04-17 07:59:47.833465 | orchestrator | | accessIPv6 | | 2026-04-17 07:59:47.833476 | orchestrator | | addresses | test-3=192.168.112.193, 192.168.202.47 | 2026-04-17 07:59:47.833490 | orchestrator | | config_drive | | 2026-04-17 07:59:47.833500 | orchestrator | | created | 2026-04-17T05:06:41Z | 2026-04-17 07:59:47.833511 | orchestrator | | description | None | 2026-04-17 07:59:47.833522 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-17 07:59:47.833532 | orchestrator | | hostId | 9a6fc80b5ef5804c541967b849f5345d94f707e64997816cdf8509da | 2026-04-17 07:59:47.833542 | orchestrator | | host_status | None | 2026-04-17 07:59:47.833592 | orchestrator | | id | ade52472-1613-40d8-aa9f-b5a0cc6f0d77 | 2026-04-17 07:59:47.833613 | orchestrator | | image | N/A (booted from volume) | 2026-04-17 07:59:47.833625 | orchestrator | | key_name | test | 2026-04-17 07:59:47.833636 | orchestrator | | locked | False | 2026-04-17 07:59:47.833652 | orchestrator | | locked_reason | None | 2026-04-17 07:59:47.833663 | orchestrator | | name | test-4 | 2026-04-17 07:59:47.833675 | orchestrator | | pinned_availability_zone | None | 2026-04-17 07:59:47.833687 | orchestrator | | progress | 0 | 2026-04-17 
07:59:47.833699 | orchestrator | | project_id | d684144c40b742af8c0edaad54fe7ba2 | 2026-04-17 07:59:47.833709 | orchestrator | | properties | hostname='test-4' | 2026-04-17 07:59:47.833734 | orchestrator | | security_groups | name='icmp' | 2026-04-17 07:59:47.833746 | orchestrator | | | name='ssh' | 2026-04-17 07:59:47.833757 | orchestrator | | server_groups | None | 2026-04-17 07:59:47.833769 | orchestrator | | status | ACTIVE | 2026-04-17 07:59:47.833785 | orchestrator | | tags | test | 2026-04-17 07:59:47.833798 | orchestrator | | trusted_image_certificates | None | 2026-04-17 07:59:47.833809 | orchestrator | | updated | 2026-04-17T07:58:30Z | 2026-04-17 07:59:47.833820 | orchestrator | | user_id | 232c7f32176f4cb293228fac19d8e2ff | 2026-04-17 07:59:47.833832 | orchestrator | | volumes_attached | delete_on_termination='True', id='f71a7bd0-47ba-449a-90f3-77f4f677840c' | 2026-04-17 07:59:47.837205 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 07:59:48.150095 | orchestrator | + server_ping 2026-04-17 07:59:48.150305 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-17 07:59:48.150415 | orchestrator | ++ tr -d '\r' 2026-04-17 07:59:51.015097 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-17 07:59:51.015287 | orchestrator | + ping -c3 192.168.112.193 2026-04-17 07:59:51.028318 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data. 
2026-04-17 07:59:51.028382 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=6.91 ms
2026-04-17 07:59:52.025773 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.00 ms
2026-04-17 07:59:53.027533 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.75 ms
2026-04-17 07:59:53.027700 | orchestrator |
2026-04-17 07:59:53.027719 | orchestrator | --- 192.168.112.193 ping statistics ---
2026-04-17 07:59:53.027733 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 07:59:53.027744 | orchestrator | rtt min/avg/max/mdev = 1.754/3.554/6.912/2.376 ms
2026-04-17 07:59:53.027756 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 07:59:53.027768 | orchestrator | + ping -c3 192.168.112.153
2026-04-17 07:59:53.042768 | orchestrator | PING 192.168.112.153 (192.168.112.153) 56(84) bytes of data.
2026-04-17 07:59:53.042843 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=1 ttl=63 time=8.84 ms
2026-04-17 07:59:54.037693 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=2 ttl=63 time=2.09 ms
2026-04-17 07:59:55.039119 | orchestrator | 64 bytes from 192.168.112.153: icmp_seq=3 ttl=63 time=1.61 ms
2026-04-17 07:59:55.039235 | orchestrator |
2026-04-17 07:59:55.039252 | orchestrator | --- 192.168.112.153 ping statistics ---
2026-04-17 07:59:55.039266 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 07:59:55.039278 | orchestrator | rtt min/avg/max/mdev = 1.607/4.180/8.842/3.302 ms
2026-04-17 07:59:55.040010 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 07:59:55.040035 | orchestrator | + ping -c3 192.168.112.133
2026-04-17 07:59:55.052433 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2026-04-17 07:59:55.052529 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=7.82 ms
2026-04-17 07:59:56.048480 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.39 ms
2026-04-17 07:59:57.048687 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.37 ms
2026-04-17 07:59:57.048758 | orchestrator |
2026-04-17 07:59:57.048765 | orchestrator | --- 192.168.112.133 ping statistics ---
2026-04-17 07:59:57.048771 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-17 07:59:57.048776 | orchestrator | rtt min/avg/max/mdev = 1.365/3.857/7.821/2.833 ms
2026-04-17 07:59:57.049087 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 07:59:57.049114 | orchestrator | + ping -c3 192.168.112.109
2026-04-17 07:59:57.066764 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2026-04-17 07:59:57.066786 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=8.89 ms
2026-04-17 07:59:58.060529 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=1.82 ms
2026-04-17 07:59:59.061750 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.26 ms
2026-04-17 07:59:59.061825 | orchestrator |
2026-04-17 07:59:59.061834 | orchestrator | --- 192.168.112.109 ping statistics ---
2026-04-17 07:59:59.061842 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-17 07:59:59.061848 | orchestrator | rtt min/avg/max/mdev = 1.257/3.987/8.886/3.471 ms
2026-04-17 07:59:59.061876 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 07:59:59.061882 | orchestrator | + ping -c3 192.168.112.122
2026-04-17 07:59:59.070700 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
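The connectivity check traced above loops over every ACTIVE floating IP and pings it three times. A minimal sketch of that `server_ping` step, reconstructed from the `set -x` trace (the cloud name `test` comes from the job's `clouds.yaml`; the function body beyond what the trace shows is an assumption):

```shell
#!/usr/bin/env bash
# Sketch of the server_ping step reconstructed from the set -x trace above.
# Assumes a clouds.yaml entry named "test" is available to the CLI.
set -euo pipefail

server_ping() {
    local address
    # tr -d '\r' strips carriage returns that can appear in CLI output
    # when it is captured through a pseudo-terminal
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        # three probes per address; with set -e, any packet loss that
        # makes ping exit non-zero fails the whole step
        ping -c3 "$address"
    done
}
```

With `set -e` in effect, the first unreachable address aborts the step, which is how the job verifies end-to-end connectivity after the upgrade.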
2026-04-17 07:59:59.070785 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=6.24 ms
2026-04-17 08:00:00.067949 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=1.96 ms
2026-04-17 08:00:01.068880 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.67 ms
2026-04-17 08:00:01.068951 | orchestrator |
2026-04-17 08:00:01.068958 | orchestrator | --- 192.168.112.122 ping statistics ---
2026-04-17 08:00:01.068964 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-17 08:00:01.068969 | orchestrator | rtt min/avg/max/mdev = 1.672/3.288/6.235/2.087 ms
2026-04-17 08:00:01.069482 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-17 08:00:01.519240 | orchestrator | ok: Runtime: 0:09:31.714243
2026-04-17 08:00:01.627540 |
2026-04-17 08:00:01.627772 | PLAY RECAP
2026-04-17 08:00:01.627926 | orchestrator | ok: 32 changed: 13 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-17 08:00:01.628001 |
2026-04-17 08:00:01.910436 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-04-17 08:00:01.914208 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-17 08:00:02.671943 |
2026-04-17 08:00:02.672154 | PLAY [Post output play]
2026-04-17 08:00:02.690904 |
2026-04-17 08:00:02.691121 | LOOP [stage-output : Register sources]
2026-04-17 08:00:02.747658 |
2026-04-17 08:00:02.747923 | TASK [stage-output : Check sudo]
2026-04-17 08:00:03.629751 | orchestrator | sudo: a password is required
2026-04-17 08:00:03.787148 | orchestrator | ok: Runtime: 0:00:00.017638
2026-04-17 08:00:03.797739 |
2026-04-17 08:00:03.797898 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-17 08:00:03.843822 |
2026-04-17 08:00:03.844196 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-17 08:00:03.923828 | orchestrator | ok
2026-04-17 08:00:03.932978 |
2026-04-17 08:00:03.933143 | LOOP [stage-output : Ensure target folders exist]
2026-04-17 08:00:04.406716 | orchestrator | ok: "docs"
2026-04-17 08:00:04.407165 |
2026-04-17 08:00:04.639859 | orchestrator | ok: "artifacts"
2026-04-17 08:00:04.884464 | orchestrator | ok: "logs"
2026-04-17 08:00:04.904593 |
2026-04-17 08:00:04.904816 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-17 08:00:04.943042 |
2026-04-17 08:00:04.943355 | TASK [stage-output : Make all log files readable]
2026-04-17 08:00:05.245965 | orchestrator | ok
2026-04-17 08:00:05.253964 |
2026-04-17 08:00:05.254121 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-17 08:00:05.279454 | orchestrator | skipping: Conditional result was False
2026-04-17 08:00:05.296847 |
2026-04-17 08:00:05.297018 | TASK [stage-output : Discover log files for compression]
2026-04-17 08:00:05.313015 | orchestrator | skipping: Conditional result was False
2026-04-17 08:00:05.323630 |
2026-04-17 08:00:05.323753 | LOOP [stage-output : Archive everything from logs]
2026-04-17 08:00:05.370743 |
2026-04-17 08:00:05.371036 | PLAY [Post cleanup play]
2026-04-17 08:00:05.382996 |
2026-04-17 08:00:05.383138 | TASK [Set cloud fact (Zuul deployment)]
2026-04-17 08:00:05.440537 | orchestrator | ok
2026-04-17 08:00:05.454458 |
2026-04-17 08:00:05.454620 | TASK [Set cloud fact (local deployment)]
2026-04-17 08:00:05.501283 | orchestrator | skipping: Conditional result was False
2026-04-17 08:00:05.511442 |
2026-04-17 08:00:05.511566 | TASK [Clean the cloud environment]
2026-04-17 08:00:06.113459 | orchestrator | 2026-04-17 08:00:06 - clean up servers
2026-04-17 08:00:06.933326 | orchestrator | 2026-04-17 08:00:06 - testbed-manager
2026-04-17 08:00:07.014179 | orchestrator | 2026-04-17 08:00:07 - testbed-node-4
2026-04-17 08:00:07.095754 | orchestrator | 2026-04-17 08:00:07 - testbed-node-3
2026-04-17 08:00:07.191993 | orchestrator | 2026-04-17 08:00:07 - testbed-node-2
2026-04-17 08:00:07.283715 | orchestrator | 2026-04-17 08:00:07 - testbed-node-0
2026-04-17 08:00:07.377805 | orchestrator | 2026-04-17 08:00:07 - testbed-node-5
2026-04-17 08:00:07.478468 | orchestrator | 2026-04-17 08:00:07 - testbed-node-1
2026-04-17 08:00:07.565189 | orchestrator | 2026-04-17 08:00:07 - clean up keypairs
2026-04-17 08:00:07.585039 | orchestrator | 2026-04-17 08:00:07 - testbed
2026-04-17 08:00:07.610348 | orchestrator | 2026-04-17 08:00:07 - wait for servers to be gone
2026-04-17 08:00:18.611188 | orchestrator | 2026-04-17 08:00:18 - clean up ports
2026-04-17 08:00:18.817250 | orchestrator | 2026-04-17 08:00:18 - 079b40f9-8bfb-4a47-905e-ff258adf68ca
2026-04-17 08:00:19.086879 | orchestrator | 2026-04-17 08:00:19 - 19374b75-781e-4ba7-b0a7-005125eaac88
2026-04-17 08:00:19.399331 | orchestrator | 2026-04-17 08:00:19 - 1c27ec21-0f48-4fd1-9b2c-ddd33e8b294f
2026-04-17 08:00:19.924367 | orchestrator | 2026-04-17 08:00:19 - 821cfc1e-0d01-4c68-ba4b-b079fa2ad78b
2026-04-17 08:00:20.204391 | orchestrator | 2026-04-17 08:00:20 - dd198384-8465-4527-9b5a-81aa3b5a0090
2026-04-17 08:00:20.457135 | orchestrator | 2026-04-17 08:00:20 - eaebda7f-7735-4785-aed7-249b2a22c7bc
2026-04-17 08:00:21.181125 | orchestrator | 2026-04-17 08:00:21 - ee4b7ef4-0fb9-4115-aab4-acd00781834f
2026-04-17 08:00:21.393349 | orchestrator | 2026-04-17 08:00:21 - clean up volumes
2026-04-17 08:00:21.514898 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-3-node-base
2026-04-17 08:00:21.551020 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-5-node-base
2026-04-17 08:00:21.588026 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-2-node-base
2026-04-17 08:00:21.627583 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-manager-base
2026-04-17 08:00:21.665981 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-1-node-base
2026-04-17 08:00:21.704587 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-0-node-base
2026-04-17 08:00:21.744742 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-7-node-4
2026-04-17 08:00:21.785934 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-1-node-4
2026-04-17 08:00:21.829117 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-0-node-3
2026-04-17 08:00:21.874678 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-5-node-5
2026-04-17 08:00:21.916200 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-6-node-3
2026-04-17 08:00:21.959371 | orchestrator | 2026-04-17 08:00:21 - testbed-volume-3-node-3
2026-04-17 08:00:22.002692 | orchestrator | 2026-04-17 08:00:22 - testbed-volume-2-node-5
2026-04-17 08:00:22.047144 | orchestrator | 2026-04-17 08:00:22 - testbed-volume-4-node-4
2026-04-17 08:00:22.092646 | orchestrator | 2026-04-17 08:00:22 - testbed-volume-4-node-base
2026-04-17 08:00:22.139566 | orchestrator | 2026-04-17 08:00:22 - testbed-volume-8-node-5
2026-04-17 08:00:22.185880 | orchestrator | 2026-04-17 08:00:22 - disconnect routers
2026-04-17 08:00:22.818967 | orchestrator | 2026-04-17 08:00:22 - testbed
2026-04-17 08:00:24.349760 | orchestrator | 2026-04-17 08:00:24 - clean up subnets
2026-04-17 08:00:24.411855 | orchestrator | 2026-04-17 08:00:24 - subnet-testbed-management
2026-04-17 08:00:24.588310 | orchestrator | 2026-04-17 08:00:24 - clean up networks
2026-04-17 08:00:24.757239 | orchestrator | 2026-04-17 08:00:24 - net-testbed-management
2026-04-17 08:00:25.062953 | orchestrator | 2026-04-17 08:00:25 - clean up security groups
2026-04-17 08:00:25.112520 | orchestrator | 2026-04-17 08:00:25 - testbed-node
2026-04-17 08:00:25.222392 | orchestrator | 2026-04-17 08:00:25 - testbed-management
2026-04-17 08:00:25.336543 | orchestrator | 2026-04-17 08:00:25 - clean up floating ips
2026-04-17 08:00:25.383361 | orchestrator | 2026-04-17 08:00:25 - 81.163.192.96
2026-04-17 08:00:25.741309 | orchestrator | 2026-04-17 08:00:25 - clean up routers
2026-04-17 08:00:25.811338 | orchestrator | 2026-04-17 08:00:25 - testbed
2026-04-17 08:00:27.077709 | orchestrator | ok: Runtime: 0:00:20.924302
2026-04-17 08:00:27.081583 |
2026-04-17 08:00:27.081675 | PLAY RECAP
2026-04-17 08:00:27.081733 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-17 08:00:27.081757 |
2026-04-17 08:00:27.211904 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-17 08:00:27.213310 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-17 08:00:27.945612 |
2026-04-17 08:00:27.945780 | PLAY [Cleanup play]
2026-04-17 08:00:27.962184 |
2026-04-17 08:00:27.962328 | TASK [Set cloud fact (Zuul deployment)]
2026-04-17 08:00:28.014586 | orchestrator | ok
2026-04-17 08:00:28.021686 |
2026-04-17 08:00:28.021818 | TASK [Set cloud fact (local deployment)]
2026-04-17 08:00:28.076915 | orchestrator | skipping: Conditional result was False
2026-04-17 08:00:28.102935 |
2026-04-17 08:00:28.103214 | TASK [Clean the cloud environment]
2026-04-17 08:00:29.571780 | orchestrator | 2026-04-17 08:00:29 - clean up servers
2026-04-17 08:00:30.183573 | orchestrator | 2026-04-17 08:00:30 - clean up keypairs
2026-04-17 08:00:30.205345 | orchestrator | 2026-04-17 08:00:30 - wait for servers to be gone
2026-04-17 08:00:30.251547 | orchestrator | 2026-04-17 08:00:30 - clean up ports
2026-04-17 08:00:30.348934 | orchestrator | 2026-04-17 08:00:30 - clean up volumes
2026-04-17 08:00:30.428685 | orchestrator | 2026-04-17 08:00:30 - disconnect routers
2026-04-17 08:00:30.459224 | orchestrator | 2026-04-17 08:00:30 - clean up subnets
2026-04-17 08:00:30.479436 | orchestrator | 2026-04-17 08:00:30 - clean up networks
2026-04-17 08:00:31.129806 | orchestrator | 2026-04-17 08:00:31 - clean up security groups
2026-04-17 08:00:31.179444 | orchestrator | 2026-04-17 08:00:31 - clean up floating ips
2026-04-17 08:00:31.203115 | orchestrator | 2026-04-17 08:00:31 - clean up routers
2026-04-17 08:00:31.655424 | orchestrator | ok: Runtime: 0:00:02.325515
2026-04-17 08:00:31.659306 |
2026-04-17 08:00:31.659466 | PLAY RECAP
2026-04-17 08:00:31.659593 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-17 08:00:31.659654 |
2026-04-17 08:00:31.793535 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-17 08:00:31.794582 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-17 08:00:32.564043 |
2026-04-17 08:00:32.564223 | PLAY [Base post-fetch]
2026-04-17 08:00:32.580789 |
2026-04-17 08:00:32.580927 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-17 08:00:32.657334 | orchestrator | skipping: Conditional result was False
2026-04-17 08:00:32.672717 |
2026-04-17 08:00:32.672934 | TASK [fetch-output : Set log path for single node]
2026-04-17 08:00:32.720810 | orchestrator | ok
2026-04-17 08:00:32.730308 |
2026-04-17 08:00:32.730441 | LOOP [fetch-output : Ensure local output dirs]
2026-04-17 08:00:33.246807 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/7b8edaf9148748ce8bf9b3adbffd19c3/work/logs"
2026-04-17 08:00:33.515917 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7b8edaf9148748ce8bf9b3adbffd19c3/work/artifacts"
2026-04-17 08:00:33.807436 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7b8edaf9148748ce8bf9b3adbffd19c3/work/docs"
2026-04-17 08:00:33.822756 |
2026-04-17 08:00:33.822916 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-17 08:00:34.781084 | orchestrator | changed: .d..t...... ./
2026-04-17 08:00:34.781508 | orchestrator | changed: All items complete
2026-04-17 08:00:34.781573 |
2026-04-17 08:00:35.545285 | orchestrator | changed: .d..t...... ./
2026-04-17 08:00:36.325361 | orchestrator | changed: .d..t...... ./
2026-04-17 08:00:36.346822 |
2026-04-17 08:00:36.346964 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-17 08:00:36.381795 | orchestrator | skipping: Conditional result was False
2026-04-17 08:00:36.384573 | orchestrator | skipping: Conditional result was False
2026-04-17 08:00:36.406420 |
2026-04-17 08:00:36.406545 | PLAY RECAP
2026-04-17 08:00:36.406615 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-17 08:00:36.406653 |
2026-04-17 08:00:36.571792 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-17 08:00:36.573074 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-17 08:00:37.343887 |
2026-04-17 08:00:37.344050 | PLAY [Base post]
2026-04-17 08:00:37.358726 |
2026-04-17 08:00:37.358894 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-17 08:00:38.370637 | orchestrator | changed
2026-04-17 08:00:38.379238 |
2026-04-17 08:00:38.379363 | PLAY RECAP
2026-04-17 08:00:38.379425 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-17 08:00:38.379485 |
2026-04-17 08:00:38.502980 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-17 08:00:38.503988 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-17 08:00:39.322546 |
2026-04-17 08:00:39.322716 | PLAY [Base post-logs]
2026-04-17 08:00:39.333501 |
2026-04-17 08:00:39.333629 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-17 08:00:39.885049 | localhost | changed
2026-04-17 08:00:39.901685 |
2026-04-17 08:00:39.901909 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-17 08:00:39.945849 | localhost | ok
2026-04-17 08:00:39.952055 |
2026-04-17 08:00:39.952271 | TASK [Set zuul-log-path fact]
2026-04-17 08:00:39.969652 | localhost | ok
2026-04-17 08:00:39.980903 |
2026-04-17 08:00:39.981040 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-17 08:00:40.016978 | localhost | ok
2026-04-17 08:00:40.023612 |
2026-04-17 08:00:40.023841 | TASK [upload-logs : Create log directories]
2026-04-17 08:00:40.533653 | localhost | changed
2026-04-17 08:00:40.538999 |
2026-04-17 08:00:40.539215 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-17 08:00:41.060942 | localhost -> localhost | ok: Runtime: 0:00:00.006576
2026-04-17 08:00:41.065091 |
2026-04-17 08:00:41.065231 | TASK [upload-logs : Upload logs to log server]
2026-04-17 08:00:41.670412 | localhost | Output suppressed because no_log was given
2026-04-17 08:00:41.672525 |
2026-04-17 08:00:41.672635 | LOOP [upload-logs : Compress console log and json output]
2026-04-17 08:00:41.726490 | localhost | skipping: Conditional result was False
2026-04-17 08:00:41.731887 | localhost | skipping: Conditional result was False
2026-04-17 08:00:41.742201 |
2026-04-17 08:00:41.742326 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-17 08:00:41.797308 | localhost | skipping: Conditional result was False
2026-04-17 08:00:41.797726 |
2026-04-17 08:00:41.803428 | localhost | skipping: Conditional result was False
2026-04-17 08:00:41.814628 |
2026-04-17 08:00:41.814948 | LOOP [upload-logs : Upload console log and json output]
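The "Clean the cloud environment" tasks above tear the testbed down in a fixed order: servers first, then keypairs, a wait for the servers to be gone, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs, and finally routers. A hypothetical sketch of that dependency order using the plain OpenStack CLI (resource names are taken from the log; the job itself runs its own cleanup script, so this is only an illustration):

```shell
#!/usr/bin/env bash
# Illustrative teardown in the same order as the "Clean the cloud
# environment" task; names are copied from the job log above.
set -euo pipefail

cleanup_testbed() {
    local cloud=test
    # servers first, so nothing keeps ports and volumes attached;
    # --wait covers the "wait for servers to be gone" step
    openstack --os-cloud "$cloud" server delete --wait \
        testbed-manager testbed-node-{0..5}
    openstack --os-cloud "$cloud" keypair delete testbed
    # leftover ports and volumes would block network/subnet deletion
    for port in $(openstack --os-cloud "$cloud" port list -f value -c ID); do
        openstack --os-cloud "$cloud" port delete "$port"
    done
    for volume in $(openstack --os-cloud "$cloud" volume list -f value -c ID); do
        openstack --os-cloud "$cloud" volume delete "$volume"
    done
    # detach the router from the subnet before the subnet can go away
    openstack --os-cloud "$cloud" router remove subnet testbed subnet-testbed-management
    openstack --os-cloud "$cloud" subnet delete subnet-testbed-management
    openstack --os-cloud "$cloud" network delete net-testbed-management
    openstack --os-cloud "$cloud" security group delete testbed-node testbed-management
    openstack --os-cloud "$cloud" floating ip delete 81.163.192.96
    # routers last, once nothing references them
    openstack --os-cloud "$cloud" router delete testbed
}
```

The ordering matters because Neutron refuses to delete a network that still has ports, or a subnet that is still attached to a router, so the cleanup must walk the dependency chain from leaves to roots.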